NTH‐ROOT STACK NONLINEAR MULTICHANNEL FILTER

Geophysics
1973
Vol 38 (2)
pp. 327-338
Author(s):  
E. R. Kanasewich ◽  
C. D. Hemmings ◽  
T. Alpaslan

A nonlinear multichannel filter is developed that appears to be particularly useful for enhancing seismic refraction and teleseismic array data. The basic filter extracts the Nth root of each element in the matrix forming the data set, where N is any positive integer, sums over the channels, and raises the sum to the Nth power. The filter is effective in reducing random noise, while identical signals that are in phase on all channels are retained at the expense of some distortion. The output from this nonlinear filter has far greater resolution in specifying phase velocity than any multichannel linear filter we have employed. Examples of theoretical and actual field seismograms are presented after various forms of filtering to illustrate the filters' effectiveness.
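
A minimal sketch of the Nth-root stack described above. The signed root and signed power (to handle negative samples) and the use of a channel average rather than a bare sum are assumptions, not details taken from the paper:

```python
import numpy as np

def nth_root_stack(data, N=4):
    # data: (channels, samples) array
    roots = np.sign(data) * np.abs(data) ** (1.0 / N)  # signed Nth root per sample
    s = roots.mean(axis=0)                             # stack over channels
    return np.sign(s) * np.abs(s) ** N                 # signed Nth power of the stack

# Identical in-phase signals pass through essentially unchanged; incoherent
# noise is strongly suppressed because its roots average toward zero.
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2.0 * np.pi * 5.0 * t)
coherent = np.tile(signal, (8, 1))   # same trace on 8 channels
out = nth_root_stack(coherent, N=4)
```

For coherent input, root-then-power is an identity per sample, which is the sense in which in-phase signals are retained; with noise present, the Nth root compresses large outliers before stacking, which is the source of the noise suppression and of the distortion mentioned above.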

2016
Vol 47 (5)
pp. 919-931
Author(s):  
A. Ufuk Şahin ◽  
Emin Çiftçi

A new parameter estimation methodology was established for interpreting the transient constant-head test to identify the hydrogeological parameters of an aquifer. The proposed method, referred to as the area matching process (AMP), links the field data to the theoretical type curve through a unique area computed above these curves, bounded by a user-specified integration interval. The method removes the need to superimpose theoretical type curves on the field data collected during the test, a step that may introduce unexpected errors in assessing aquifer parameters. The AMP approach was applied to a number of synthetically generated hypothetical test data sets augmented with several random-noise levels, which mimic the uncertainty in site measurements together with porous-media heterogeneity, and to an actual field data set available in the literature. The estimation performance of the AMP method was also compared with existing traditional and recently developed techniques. As the test results demonstrate, the accuracy, reliability, robustness, and simplicity of the proposed technique provide significant flexibility in field applications.
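
An illustrative sketch of the core AMP idea only, not the paper's actual type curves or matching equations: areas computed over the same user-specified integration interval link field data to a theoretical curve without any graphical superposition. The logarithmic "type curve" and the scaled "field" response are invented stand-ins:

```python
import numpy as np

def area(y, x):
    # trapezoidal area under y(x) over the integration interval
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(1.0, 10.0, 200)     # user-specified integration interval (assumed)
type_curve = np.log(x)              # stand-in theoretical type curve
field_data = 2.0 * np.log(x)        # synthetic "field" response, scaled copy

# the ratio of the two areas recovers the scaling that relates the curves,
# with no curve overlay or visual matching required
scale = area(field_data, x) / area(type_curve, x)
```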


2021
Vol 15
pp. 174830262199962
Author(s):  
Patrick O Kano ◽  
Moysey Brio ◽  
Jacob Bailey

The Weeks method for the numerical inversion of the Laplace transform utilizes a Möbius transformation which is parameterized by two real quantities, σ and b. Proper selection of these parameters depends highly on the Laplace-space function F(s) and is generally a nontrivial task. In this paper, a convolutional neural network is trained to determine optimal values for these parameters for the specific case of the matrix exponential. The matrix exponential e^A is estimated by numerically inverting the corresponding resolvent matrix (sI - A)^-1 via the Weeks method at the (σ, b) pairs provided by the network. For illustration, classes of square real matrices of size three to six are studied. For these small matrices, the Cayley-Hamilton theorem and rational approximations can be utilized to obtain values to compare with the network-derived estimates. The network was trained by minimizing the error of the matrix exponentials from the Weeks method over a large data set spanning (σ, b) pairs. Network training using the Jacobi identity as a metric was found to yield a self-contained approach that does not require a truth matrix exponential for comparison.
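
The Jacobi identity mentioned above, det(e^A) = e^(tr A), can serve as a truth-free error metric because it must hold for any exact matrix exponential. A small sketch of that check; the eigendecomposition-based exponential for a symmetric matrix is a stand-in for illustration, not the paper's Weeks-method inversion:

```python
import numpy as np

def expm_sym(A):
    # matrix exponential of a symmetric matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def jacobi_residual(A, expA):
    # residual of det(e^A) - e^(tr A); vanishes for an accurate exponential,
    # so it can score an estimate without a reference ("truth") exponential
    return abs(np.linalg.det(expA) - np.exp(np.trace(A)))

A = np.array([[0.5, 0.2, 0.0],
              [0.2, -0.3, 0.1],
              [0.0, 0.1, 0.8]])
res = jacobi_residual(A, expm_sym(A))   # near machine precision here
```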


1992
Vol 29 (7)
pp. 1492-1508
Author(s):  
S. A. Dehler ◽  
R. M. Clowes

An integrated geophysical data set has been used to develop structural models across the continental margin west of Vancouver Island, Canada. A modern accretionary complex underlies the continental slope and shelf and rests against and below the allochthonous Crescent and Pacific Rim terranes. These terranes in turn abut against the pre-Tertiary Wrangellia terrane that constitutes most of the island. Gravity and magnetic anomaly data, constrained by seismic reflection, seismic refraction, and other data, were interpreted to determine the offshore positions of these terranes and related features. Iterative 2.5-dimensional forward models of anomaly profiles were stepped laterally along the margin to extend areal coverage over a 70 km wide swath oriented normal to the tectonic features. An average model was then developed to represent this part of the margin. The Pacific Rim terrane appears to be continuous and close to the coastline along the length of Vancouver Island, consistent with emplacement by strike-slip motion along the margin. The Westcoast fault, the boundary between the Pacific Rim and Wrangellia terranes, is interpreted to be 15 km farther seaward than in previous interpretations in the region of Barkley Sound. The Crescent terrane forms a thin landward-dipping slab along the southern half of the Vancouver Island margin, and cannot be confirmed along the northern part. Model results suggest the slab has buckled into an anticline beneath southern Vancouver Island and Juan de Fuca Strait, uplifting high-density lower crustal or upper mantle material close to the surface to produce the observed intense positive gravity anomaly. This geometry is consistent with emplacement of the Crescent terrane by oblique subduction.


2019
Vol 217 (3)
pp. 1727-1741
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by velocity-model errors and by additive random noise containing a significant number of outliers. It is found that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
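
A hedged sketch of the grid-search step described above. Constant-velocity straight-ray traveltimes stand in for the stored waveform-derived traveltime fields, and the station geometry and velocity are invented for illustration; the point is that the dispersion of the shifted, time-reversed fields is minimized at the source:

```python
import numpy as np

v = 3.0                                            # km/s, assumed uniform velocity
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                     [10.0, 10.0], [5.0, -3.0]])   # hypothetical network (km)
true_src = np.array([4.0, 6.0])
t0 = 2.0                                           # event origin time (s)
arrivals = t0 + np.linalg.norm(stations - true_src, axis=1) / v

# grid of candidate source positions
xs = np.linspace(0.0, 10.0, 101)
X, Y = np.meshgrid(xs, xs)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)

# traveltime from every station to every grid point (the "stored fields")
T = np.linalg.norm(grid[None, :, :] - stations[:, None, :], axis=2) / v
origin_est = arrivals[:, None] - T                 # shifted, time-reversed fields
spread = origin_est.std(axis=0)                    # dispersion across stations
best = grid[np.argmin(spread)]                     # minimized at the source
origin_time = origin_est[:, np.argmin(spread)].mean()  # mean approximates t0
```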


Geophysics
2000
Vol 65 (5)
pp. 1446-1454
Author(s):  
Side Jin ◽  
G. Cambois ◽  
C. Vuillermoz

S-wave velocity and density information is crucial for hydrocarbon detection because it helps discriminate pore-filling fluids. Unfortunately, these two parameters cannot be accurately resolved from conventional P-wave marine data. Recent developments in ocean-bottom seismic (OBS) technology make it possible to acquire high-quality S-wave data in marine environments. The use of S-waves for amplitude variation with offset (AVO) analysis can give better estimates of S-wave velocity and density contrasts. Like P-wave AVO, S-wave AVO is sensitive to various types of noise. We investigate numerically and analytically the sensitivity of AVO inversion to random noise and errors in angles of incidence. Synthetic examples show that random noise and angle errors can strongly bias the parameter estimation. The use of singular value decomposition offers a simple stabilization scheme for solving for the elastic parameters. The AVO inversion is applied to an OBS data set from the North Sea. Special prestack processing techniques are required for the success of S-wave AVO inversion. The derived S-wave velocity and density contrasts help detect the fluid contacts and delineate the extent of the reservoir sand.
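
A sketch of the SVD stabilization idea in an AVO-style linear inversion. The generic two-term model R(θ) = A + B sin²θ and all numbers here are illustrative assumptions, not the paper's S-wave AVO equations:

```python
import numpy as np

# angles of incidence and forward operator for R(theta) = A + B sin^2(theta)
theta = np.deg2rad(np.arange(5.0, 41.0, 5.0))
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])

# synthetic noisy amplitudes (hypothetical intercept/gradient)
A_true, B_true = 0.10, -0.25
rng = np.random.default_rng(1)
d = A_true + B_true * np.sin(theta) ** 2 + 0.005 * rng.standard_normal(theta.size)

# truncated-SVD pseudoinverse: discarding near-zero singular values keeps
# noise from being amplified along poorly constrained directions
U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 1e-10 * s[0]
m = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])
A_est, B_est = m
```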


2019
Author(s):  
Lin Fei ◽  
Yang Yang ◽  
Wang Shihua ◽  
Xu Yudi ◽  
Ma Hong

Unreasonable division of public bicycle dispatching areas seriously affects the operational efficiency of a public bicycle system. To solve this problem, this paper proposes an improved community discovery algorithm based on multi-objective optimization (CDoMO). The data set is preprocessed into a lease/return relationship, from which a similarity matrix is calculated; the community discovery algorithm Fast Unfolding is then executed on this matrix to obtain a scheduling scheme. The workload indicators of the resulting scheme (scheduling distance, number of sites, and number of scheduled bicycles) are adjusted to maximize the overall benefit, and the entire process is continuously optimized by the multi-objective optimization algorithm NSGA-II. The experimental results show that, compared with clustering algorithms and community discovery algorithms, the method can shorten the estimated scheduling distance by 20%-50% and can effectively balance the scheduling workload across areas. The method can provide theoretical support for public bicycle dispatching departments and improve the efficiency of public bicycle dispatching systems.
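
A hypothetical sketch of the preprocessing step only: trip counts between stations become a symmetric, normalized lease/return similarity matrix on which a community discovery algorithm such as Fast Unfolding (Louvain) would then run. The trip counts are invented, and the normalization scheme is an assumption; the Louvain step itself is omitted:

```python
import numpy as np

# trips[i, j]: number of bikes leased at station i and returned at station j
trips = np.array([[0, 8, 1, 0],
                  [6, 0, 2, 1],
                  [1, 1, 0, 9],
                  [0, 2, 7, 0]], dtype=float)

flow = trips + trips.T                        # symmetrize the lease/return relationship
norms = np.linalg.norm(flow, axis=1)
similarity = flow / np.outer(norms, norms)    # normalized symmetric similarity matrix
```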


Author(s):  
A. S. Oke ◽  
S. M. Akintewe ◽  
A. G. Akinbande

A Generalised Euclidean Least Square Approximation (ELS) is derived in this paper. It is obtained by generalizing interpolation over n arbitrary data points to the approximation of functions. Existence and uniqueness of the ELS schemes are shown by establishing the invertibility of the coefficient matrix, using a condensation method to reduce the matrix. The method is illustrated for the exponential function, and the results are compared with the classical Maclaurin series.
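
The generalised ELS scheme itself is not reproduced here; a standard least-squares quadratic fit to exp(x) on [0, 1] merely illustrates the kind of comparison made, against the degree-2 Maclaurin polynomial 1 + x + x²/2:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 21)
V = np.vander(x, 3, increasing=True)          # columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(V, np.exp(x), rcond=None)

ls_err = float(np.max(np.abs(V @ coef - np.exp(x))))
mac_err = float(np.max(np.abs(1.0 + x + x**2 / 2.0 - np.exp(x))))
# the least-squares fit spreads error over the interval, so its maximum
# error is far below the Maclaurin polynomial's (which peaks at x = 1)
```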


Geophysics
1964
Vol 29 (5)
pp. 783-805
Author(s):  
William A. Schneider ◽  
Kenneth L. Larner ◽  
J. P. Burg ◽  
Milo M. Backus

A new data-processing technique is presented for the separation of initially up-traveling (ghost) energy from initially down-traveling (primary) energy on reflection seismograms. The method combines records from two or more shot depths after prefiltering each record with a different filter. The filters are designed on a least-mean-square-error criterion to extract primary reflections in the presence of ghost reflections and random noise. Filter design depends only on the difference in uphole time between shots and is independent of the details of near-surface layering. The method achieves wide-band separation of primary and ghost energy, which results in 10–15 dB greater attenuation of ghost reflections than can be achieved with conventional two- or three-shot stacking (no prefiltering) for ghost elimination. The technique is illustrated with both synthetic and field examples. The deghosted field data are used to study the near-surface reflection response by computing the optimum linear filter that transforms the deghosted trace back into the original ghosted trace. The impulse response of this filter embodies the effects of the near-surface on the reflection seismogram, i.e., the cause of the ghosting. Analysis of these filters reveals that the ghosting mechanism in the field test area involves both a surface reflector and a base-of-weathering reflector.
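
A toy sketch of the shot-depth geometry the method exploits, not the paper's least-squares filters: each record is a primary spike plus a delayed, inverted ghost whose delay differs with shot depth (the delays and amplitudes below are invented). Plain stacking aligned on the primary reinforces it while each misaligned ghost is only halved, which is why the prefiltered combination can do 10–15 dB better:

```python
import numpy as np

n = 100
primary = np.zeros(n)
primary[20] = 1.0                       # primary reflection spike

def record(ghost_delay):
    # ghost: delayed by the two-way time above the shot, inverted, attenuated
    ghost = np.zeros(n)
    ghost[20 + ghost_delay] = -0.8
    return primary + ghost

r1, r2 = record(6), record(10)          # two shot depths, two ghost delays
stack = 0.5 * (r1 + r2)                 # conventional two-shot stack:
                                        # primary preserved, each ghost halved
```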


Geophysics
1993
Vol 58 (5)
pp. 713-719
Author(s):  
Ghassan I. Al‐Eqabi ◽  
Robert B. Herrmann

The objective of this study is to demonstrate that a laterally varying shallow S-wave structure, derived from the dispersion of the ground roll, can explain observed lateral variations in the direct S-wave arrival. The data set consists of multichannel seismic refraction data from a USGS-GSC survey in the state of Maine and the province of Quebec. These data exhibit significant lateral changes in the moveout of the ground roll as well as in the S-wave first arrivals. A sequence of surface-wave processing steps is used to obtain a final laterally varying S-wave velocity model. These steps include visual examination of the data, stacking, waveform inversion of selected traces, phase-velocity adjustment by crosscorrelation, and phase-velocity inversion. The resulting models are used to predict the S-wave first arrivals with two-dimensional (2D) ray-tracing techniques. Observed and calculated S-wave arrivals match well over data paths 30 km long, where lateral variations in the S-wave velocity in the upper 1–2 km are as much as ±8 percent. The modeled correlation between the lateral variations in the ground roll and the S-wave arrival demonstrates that a laterally varying structure can be constrained by using surface-wave data. The application of this technique to data from shorter spreads and shallower depths is discussed.
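
A hedged sketch of one step in the workflow above, delay measurement by crosscorrelation: the lag maximizing the crosscorrelation of two traces gives their relative delay, from which a velocity follows over a known offset. The pulse shape, sampling rate, delay, and station separation are all invented for illustration:

```python
import numpy as np

fs = 100.0                                  # sampling rate, samples/s (assumed)
n = 512
t = np.arange(n) / fs
trace1 = np.exp(-((t - 2.0) / 0.2) ** 2)    # pulse arriving at 2.0 s
trace2 = np.exp(-((t - 2.3) / 0.2) ** 2)    # same pulse arriving 0.3 s later

# full crosscorrelation; index (n - 1) corresponds to zero lag
xc = np.correlate(trace2, trace1, mode="full")
lag = (np.argmax(xc) - (n - 1)) / fs        # measured delay, s

dx = 0.9                                    # station separation, km (assumed)
velocity = dx / lag                         # apparent velocity, km/s
```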


1979
Vol 69 (5)
pp. 1455-1473
Author(s):  
D. N. Whitcombe ◽  
P. K. H. Maguire

The time-term method of interpreting seismic refraction data is analyzed to examine inadequacies in the chosen time-term model by relating observational errors to the solution variance. The results obtained from data simulated for various structures are investigated, quantitatively for simple structures and semi-quantitatively for more complex cases. Velocity and topographic variations of the refractor are treated as signals having dominant wavelengths. It is found that the response of the time-term method to these structural variations depends on the relationship of the structural wavelength to the dimensions of the experiment and the critical distance. For all but the simplest structures, the standard error estimates that can be obtained from a time-term solution are likely to be completely inadequate as estimates of the true error. It is demonstrated that if the refractor is anything other than uniform, the effects of a complicated velocity structure may be absorbed into the time terms. Similarly, it is argued that in situations in which the refractor is not horizontal, erroneous values for complex velocity coefficients (e.g., gradient, anisotropy) can be obtained if these coefficients are included in the chosen time-term model. Finally, it is indicated that reduced travel times can be used in a way that removes the "stirring pot" aspect of time-term analysis and determines whether a data set is suitable for examination by the time-term method.
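
A minimal time-term system for reference: travel times are modeled as t_ij = a_i + b_j + D_ij / v and solved by linear least squares. All values below are synthetic; note that the shot and station terms trade off by a constant (a_i + c, b_j - c), one simple instance of how structure can be absorbed into the time terms, while the slowness 1/v remains uniquely determined:

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([0.10, 0.15, 0.05])            # shot time terms (s), assumed
b = np.array([0.08, 0.12, 0.20, 0.06])      # station time terms (s), assumed
v = 6.0                                     # refractor velocity (km/s), assumed
D = rng.uniform(20.0, 80.0, size=(3, 4))    # shot-station offsets (km)
t = a[:, None] + b[None, :] + D / v         # synthetic refraction travel times

# build the design matrix: 3 shot terms, 4 station terms, 1 slowness column
rows, obs = [], []
for i in range(3):
    for j in range(4):
        r = np.zeros(8)
        r[i] = 1.0                          # shot term a_i
        r[3 + j] = 1.0                      # station term b_j
        r[7] = D[i, j]                      # slowness coefficient
        rows.append(r)
        obs.append(t[i, j])
G = np.array(rows)

# rank-deficient system; lstsq returns the minimum-norm exact solution
m, *_ = np.linalg.lstsq(G, np.array(obs), rcond=None)
slowness = m[7]                             # recovers 1/v
```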

