Field array performance: Theoretical study of spatially correlated variations in amplitude coupling and static shift and case study in the Paris Basin

Geophysics ◽  
1989 ◽  
Vol 54 (4) ◽  
pp. 451-459 ◽  
Author(s):  
A. J. Berni ◽  
W. L. Roever

Near‐surface propagation anomalies degrade the performance of field arrays. We studied this problem by modeling the signal detected by a field array. In our model, the signal arrival time and amplitude were each varied with distance along the array according to some arbitrary spatial trend. Given the intensity and the correlation distance of the signal variations, both wavenumber selectivity for noise rejection and frequency response for desired signal can be calculated. We begin by describing diagnostic graphs that show an array’s attainable signal bandwidth and noise rejection capability. Next, we discuss the mathematical relationships between the graphs and observable quantities such as correlations, array lengths, geophone spacing, etc. Exponential correlation functions are used in the modeling study for illustrative purposes. The same diagnostics are then generated from measured correlations derived from experimental data acquired in the Paris Basin with a densely sampled geophone spread. We found that the bandwidth diagnostic was useful and easy to calculate for this data set. Data sets with stronger noise waves should allow an accurate calculation of noise rejection capability. The diagnostic graphs can help in choosing the number of channels, array length, and weighting in a particular exploration area.
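The degradation mechanism described in this abstract can be illustrated numerically. Below is a minimal sketch (not the authors' model; all parameter values are invented, and only the static-shift part is modeled, with amplitude coupling omitted) that draws intra-array time shifts with an exponential spatial correlation and estimates how the expected signal response of the summed array falls off with frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_statics(n, dx, sigma, corr_len):
    """Draw static time shifts (s) for n geophones spaced dx (m) apart,
    with an exponential spatial correlation of scale corr_len and std sigma."""
    x = np.arange(n) * dx
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return chol @ rng.standard_normal(n)

def mean_signal_response(freqs, n, dx, sigma, corr_len, trials=500):
    """Monte Carlo estimate of the expected |array output| for a unit-amplitude
    signal, as a function of frequency."""
    resp = np.zeros_like(freqs)
    for _ in range(trials):
        tau = correlated_statics(n, dx, sigma, corr_len)
        resp += np.abs(np.exp(2j * np.pi * np.outer(freqs, tau)).mean(axis=1))
    return resp / trials

freqs = np.linspace(0.0, 120.0, 25)  # Hz
resp = mean_signal_response(freqs, n=12, dx=5.0, sigma=0.004, corr_len=10.0)
# resp equals 1 at 0 Hz and decays toward high frequency as the
# intra-array statics progressively decorrelate the summed signal.
```

The shrinking response at high frequency is the attainable-signal-bandwidth effect the diagnostic graphs quantify: the larger the static variance relative to the period, the narrower the usable band.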

Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a work flow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated, and it can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive.
Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone's entry contains a header record, a series of data records, and a trailer record.
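As a purely illustrative example of the interpolation step described above, the sketch below fits cubic splines through hypothetical 6-hourly best-track fixes and evaluates them on an hourly grid. The book's own workflow is in R; this sketch uses Python, and all sample values are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 6-hourly best-track fixes for one cyclone:
# hours since the first fix, longitude, latitude, wind speed (kt).
hours = np.array([0.0, 6.0, 12.0, 18.0, 24.0])
lon   = np.array([-45.0, -46.2, -47.8, -49.9, -52.3])
lat   = np.array([15.0, 15.4, 16.1, 17.0, 18.2])
wind  = np.array([35.0, 40.0, 45.0, 55.0, 65.0])

# Fit one spline per attribute and evaluate on an hourly grid.
hourly = np.arange(0.0, 25.0, 1.0)
track = {name: CubicSpline(hours, vals)(hourly)
         for name, vals in [("lon", lon), ("lat", lat), ("wind", wind)]}
# The splines honor the original 6-hourly fixes exactly, while
# supplying smooth hourly positions for regional analyses.
```

A spline (rather than linear) interpolation avoids artificial kinks in the track at the 6-hourly fixes, which matters when counting hourly positions inside a regional grid cell.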


2016 ◽  
Vol 16 (11) ◽  
pp. 6977-6995 ◽  
Author(s):  
Jean-Pierre Chaboureau ◽  
Cyrille Flamant ◽  
Thibaut Dauhut ◽  
Cécile Kocha ◽  
Jean-Philippe Lafore ◽  
...  

Abstract. In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed by satellite retrievals but with different magnitudes. The AOD intensity was correctly predicted by the high-resolution models, while it was underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents, which could only be reproduced by models representing convection explicitly. Such models yield emissions mainly in the afternoon that dominate the total emission over the western fringes of the Adrar des Iforas and the Aïr Mountains in the high-resolution forecasts. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust concentration generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was partly forecast by one high-resolution model only.


Geophysics ◽  
2020 ◽  
pp. 1-41 ◽  
Author(s):  
Jens Tronicke ◽  
Niklas Allroggen ◽  
Felix Biermann ◽  
Florian Fanselow ◽  
Julien Guillemoteau ◽  
...  

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications including those from archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows a multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from North-East Germany demonstrates its potential to analyze and process rather typical geophysical anomaly maps including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach is capable of successfully separating these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, including integrated visualizations of magnetic and topographic data.
Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret different geophysical mapping data collected to address a variety of near-surface applications from engineering practice and research.
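The core of the decomposition is easy to sketch in one dimension. Below is a minimal, illustrative implementation (not the authors' code) of the à trous redundant wavelet transform with the cubic B-spline filter [1, 4, 6, 4, 1]/16; the 2D version follows by applying the same 1D convolutions along rows and then columns:

```python
import numpy as np

def atrous_rwt(signal, levels):
    """Redundant (undecimated) wavelet transform via the a trous algorithm
    with the cubic B-spline filter h = [1, 4, 6, 4, 1] / 16.
    Returns (wavelet_planes, residual); summing them reconstructs the input."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    planes = []
    approx = np.asarray(signal, dtype=float)
    for j in range(levels):
        # Insert 2**j - 1 zeros between filter taps (the "holes").
        step = 2 ** j
        taps = np.zeros(4 * step + 1)
        taps[::step] = h
        smooth = np.convolve(approx, taps, mode="same")  # zero-padded borders
        planes.append(approx - smooth)   # detail (wavelet) plane at scale j
        approx = smooth
    return planes, approx

x = (np.sin(np.linspace(0.0, 8.0 * np.pi, 256))
     + 0.1 * np.random.default_rng(1).standard_normal(256))
planes, residual = atrous_rwt(x, levels=4)
recon = sum(planes) + residual  # additive reconstruction is exact
```

Because each wavelet plane is simply the difference of successive smoothings, reconstruction is a plain sum, which is what makes scaling and recombining selected planes (as in the abstract) straightforward.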


2015 ◽  
Vol 8 (8) ◽  
pp. 2645-2653 ◽  
Author(s):  
C. G. Nunalee ◽  
Á. Horváth ◽  
S. Basu

Abstract. Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using the default global 30 s United States Geological Survey terrain height data set (GTOPO30) as well as the Shuttle Radar Topography Mission (SRTM) and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) terrain height data sets. While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.


2020 ◽  
Vol 39 (5) ◽  
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied to typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome the limitations of poor S/N and missing low frequencies, making it feasible to use land FWI to update the shallow velocities. The chosen area has contrasting and variable near-surface conditions, providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.


Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. B243-B252 ◽  
Author(s):  
Peter Bergmann ◽  
Artem Kashubin ◽  
Monika Ivandic ◽  
Stefan Lüth ◽  
Christopher Juhlin

A method for static correction of time-lapse differences in reflection arrival times of time-lapse prestack seismic data is presented. These arrival-time differences are typically caused by changes in the near-surface velocities between the acquisitions and have a detrimental impact on time-lapse seismic imaging. Trace-to-trace time shifts of the data sets from different vintages are determined by crosscorrelations. The time shifts are decomposed in a surface-consistent manner, which yields static corrections that tie the repeat data to the baseline data. Hence, this approach implies that new refraction static corrections for the repeat data sets are unnecessary. The approach is demonstrated on a 4D seismic data set from the Ketzin [Formula: see text] pilot storage site, Germany, and is compared with the result of an initial processing that was based on separate refraction static corrections. It is shown that the time-lapse difference static correction approach reduces 4D noise more effectively than separate refraction static corrections and is significantly less labor intensive.
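The surface-consistent decomposition step can be sketched as a small least-squares problem: each trace's crosscorrelation time shift is modeled as one shot term plus one receiver term. The example below is a minimal sketch with invented statics and noise values (not the Ketzin data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "true" statics (ms) for 3 shots and 4 receivers.
src_true = np.array([2.0, -1.5, 0.5])
rec_true = np.array([-0.5, 1.0, 3.0, -2.0])
ns, nr = len(src_true), len(rec_true)

# One observed trace-to-trace time shift per (shot, receiver) pair,
# modeled as source term + receiver term + measurement noise.
A = np.zeros((ns * nr, ns + nr))
t = np.zeros(ns * nr)
for i in range(ns):
    for j in range(nr):
        k = i * nr + j
        A[k, i] = 1.0        # source static for shot i
        A[k, ns + j] = 1.0   # receiver static for receiver j
        t[k] = src_true[i] + rec_true[j] + 0.05 * rng.standard_normal()

# The split between shot and receiver terms is ambiguous up to a constant
# (add c to every shot, subtract c from every receiver), so take the
# minimum-norm least-squares solution.
x, *_ = np.linalg.lstsq(A, t, rcond=None)
src_est, rec_est = x[:ns], x[ns:]
```

The per-trace shifts predicted by the estimated terms match the true ones to within the noise, even though each individual term is only determined up to the constant ambiguity noted in the comment.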


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U39-U49 ◽  
Author(s):  
Daniele Colombo ◽  
Federico Miorelli ◽  
Ernesto Sandoval ◽  
Kevin Erickson

Industry practices for near-surface analysis indicate difficulties in coping with the increased number of channels in seismic acquisition systems, and new approaches are needed to fully exploit the resolution embedded in modern seismic data sets. To achieve this goal, we have developed a novel surface-consistent refraction analysis method for low-relief geology to automatically derive near-surface corrections for seismic data processing. The method uses concepts from surface-consistent analysis applied to refracted arrivals. The key aspects of the method consist of the use of common midpoint (CMP)-offset-azimuth binning, evaluation of mean traveltime and standard deviation for each bin, rejection of anomalous first-break (FB) picks, derivation of CMP-based traveltime-offset functions, conversion to velocity-depth functions, evaluation of long-wavelength statics, and calculation of surface-consistent residual statics through waveform crosscorrelation. Residual time lags are evaluated in multiple CMP-offset-azimuth bins by crosscorrelating a pilot trace with all the other traces in the gather in which the correlation window is centered at the refracted arrival. The residuals are then used to build a system of linear equations that is simultaneously inverted for surface-consistent shot and receiver time shift corrections plus a possible subsurface residual term. All the steps are completely automated and require a fraction of the time needed for conventional near-surface analysis. The developed methodology was successfully performed on a complex 3D land data set from Central Saudi Arabia where it was benchmarked against a conventional tomographic work flow. The results indicate that the new surface-consistent refraction statics method enhances seismic imaging especially in portions of the survey dominated by noise.
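The residual-lag estimation at the heart of the waveform-crosscorrelation step can be sketched as follows; the pilot trace, wavelet, and sampling interval below are all invented for illustration:

```python
import numpy as np

def residual_lag(pilot, trace, dt):
    """Time lag (s) of `trace` relative to `pilot`, taken from the
    peak of their crosscorrelation."""
    xcorr = np.correlate(trace, pilot, mode="full")
    lag_samples = int(np.argmax(xcorr)) - (len(pilot) - 1)
    return lag_samples * dt

dt = 0.002                                   # 2 ms sampling interval
t = np.arange(0.0, 0.5, dt)
pilot = np.exp(-((t - 0.25) / 0.02) ** 2)    # synthetic refracted arrival
trace = np.roll(pilot, 5)                    # same arrival shifted by +10 ms
lag = residual_lag(pilot, trace, dt)         # recovers the 10 ms shift
```

In the method described above, such lags would be measured within a window centered on the refracted arrival for every trace in a CMP-offset-azimuth bin, then fed into the linear system for shot and receiver corrections.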


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. B173-B185 ◽  
Author(s):  
Michael S. McMillan ◽  
Douglas W. Oldenburg

We evaluated a method for cooperatively inverting multiple electromagnetic (EM) data sets with bound constraints to produce a consistent 3D resistivity model with improved resolution. Field data from the Antonio gold deposit in Peru and synthetic data were used to demonstrate this technique. We first separately inverted field airborne time-domain EM (AEM), controlled-source audio-frequency magnetotellurics (CSAMT), and direct current resistivity measurements. Each individual inversion recovered a resistor related to gold-hosted silica alteration within a relatively conductive background. The outline of the resistor in each inversion was in reasonable agreement with the mapped extent of known near-surface silica alteration. Variations between resistor recoveries in each 3D inversion model motivated a subsequent cooperative method, in which AEM data were inverted sequentially with a combined CSAMT and DC data set. This cooperative approach was first applied to a synthetic inversion over an Antonio-like simulated resistivity model, and the inversion result was both qualitatively and quantitatively closer to the true synthetic model compared to individual inversions. Using the same cooperative method, field data were inverted to produce a model that defined the target resistor while agreeing with all data sets. To test the benefit of borehole constraints, synthetic boreholes were added to the inversion as upper and lower bounds at locations of existing boreholes. The ensuing cooperative constrained synthetic inversion model had the closest match to the true simulated resistivity distribution. Bound constraints from field boreholes were then calculated by a regression relationship among the total sulfur content, alteration type, and resistivity measurements from rock samples and incorporated into the inversion. 
The resulting cooperative constrained field inversion model clearly imaged the resistive silica zone, extended the area of interpreted alteration, and also highlighted conductive zones within the resistive region potentially linked to sulfide and gold mineralization.


2016 ◽  
Author(s):  
J.-P. Chaboureau ◽  
C. Flamant ◽  
T. Dauhut ◽  
C. Kocha ◽  
J.-P. Lafore ◽  
...  

Abstract. In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed by some satellite retrievals but mislocated by others over the Sahel. The AOD intensity was correctly predicted by the high-resolution models while being underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents, which could only be reproduced by models representing convection explicitly. Such models yield emissions mainly in the afternoon that dominate the total emission over the western fringes of the Adrar des Iforas and Aïr Mountains in the high-resolution forecasts. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust extinction generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was partly forecast by one high-resolution model only.


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S197-S205 ◽  
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting the presence of near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third the dominant wavelength [Formula: see text] of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-passed filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third [Formula: see text]. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval less than approximately one-half [Formula: see text].
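The one-third-of-dominant-wavelength rule quoted above maps each band-passed data set to an approximate imaging depth. A minimal sketch, with hypothetical frequency bands and near-surface phase velocity, and taking the band center as the dominant frequency:

```python
def band_to_depth(f_low, f_high, phase_velocity):
    """Approximate imaging depth (m) for a band-passed surface-wave data
    set, using the ~one-third-of-dominant-wavelength rule."""
    f_dom = 0.5 * (f_low + f_high)        # band center as dominant frequency
    wavelength = phase_velocity / f_dom   # dominant wavelength (m)
    return wavelength / 3.0

# Hypothetical bands (Hz) and a 300 m/s near-surface phase velocity:
depths = [band_to_depth(f1, f2, phase_velocity=300.0)
          for (f1, f2) in [(5, 15), (15, 25), (25, 35)]]
# Lower-frequency bands image deeper heterogeneities.
```

This is why migrating each band-passed data set separately, as the abstract describes, yields a stack of images at progressively shallower depths as frequency increases.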

