Factors affecting spatial resolution

Geophysics ◽  
1999 ◽  
Vol 64 (3) ◽  
pp. 942-953 ◽  
Author(s):  
Gijs J. O. Vermeer

The theory of spatial resolution has been well established in various papers dealing with inversion and prestack migration. Nevertheless, there is a continuing flow of papers being published on spatial resolution, in particular in relation to spatial sampling. This paper continues the discussion and deals with various factors affecting spatial resolution. The theoretical best possible resolution can be predicted using Beylkin's formula, which quantifies the effect of frequency, aperture, offset, and acquisition geometry on resolution. In this paper, these factors are investigated using a single diffractor in a constant-velocity medium. Some simple resolution formulas are derived for 2-D zero-offset data. These formulas are similar to formulas derived elsewhere in a more heuristic way, which are in common use in the industry. The formulas are extended to 2-D common-offset data. The width of the spatial wavelet resulting from migration of the diffraction event is compared with the resolution predicted from Beylkin's formula for various 3-D single-fold data sets. The measured widths confirm the theoretical prediction that zero-offset data produce the best possible resolution and 3-D shots the worst, with common-offset gathers and cross-spreads in between. The effects of sampling and fold cannot be derived directly from Beylkin's formula; these effects are analyzed by looking at the migration noise rather than at the width of the spatial wavelet. Random coarse sampling may produce somewhat less migration noise than regular coarse sampling, though it cannot be as good as regular dense sampling. The bin-fractionation technique (which achieves finer midpoint sampling without changing the station spacings) does not achieve higher resolution than conventional sampling. Generally speaking, increasing fold does not improve the theoretically best possible resolution.
However, as noise always has a detrimental effect on the resolvability of events, fold—by reducing noise—will improve resolution in practice. This also applies to migration noise as a product of coarse sampling.
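Vermeer's exact derivation is not reproduced here, but the heuristic zero-offset resolution formulas in common industry use take a form like R = v / (4 f_max sin θ_max). A minimal sketch of that heuristic; the constant and the angle convention are assumptions of this sketch, not taken from the paper:

```python
import math

def lateral_resolution(v, f_max, theta_max_deg):
    """Heuristic migrated lateral resolution for zero-offset data:
    R = v / (4 * f_max * sin(theta_max)), where v is the medium velocity,
    f_max the maximum recovered frequency, and theta_max the maximum
    aperture (dip) angle captured by the survey."""
    theta = math.radians(theta_max_deg)
    return v / (4.0 * f_max * math.sin(theta))

# Example: 2000 m/s medium, 60 Hz maximum frequency. A wide aperture
# (60 degrees) resolves finer lateral detail than a narrow one (30 degrees).
r_wide = lateral_resolution(2000.0, 60.0, 60.0)
r_narrow = lateral_resolution(2000.0, 60.0, 30.0)
```

The formula reproduces the trends the abstract lists: resolution improves with frequency and aperture, and degrades as the geometry restricts either.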

Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1822-1832 ◽  
Author(s):  
Biondo Biondi ◽  
Gopal Palacharla

In principle, downward continuation of 3-D prestack data should be carried out in the 5-D space of full 3-D prestack geometry (recording time, source surface location, and receiver surface location), even when the data sets to be migrated have fewer dimensions, as in the case of common‐azimuth data sets that are only four dimensional. This increase in dimensionality of the computational space causes a severe increase in the amount of computations required for migrating the data. Unless this computational efficiency issue is solved, 3-D prestack migration methods based on downward continuation cannot compete with Kirchhoff methods. We address this problem by presenting a method for downward continuing common‐azimuth data in the original 4-D space of the common‐azimuth data geometry. The method is based on a new common‐azimuth downward‐continuation operator derived by a stationary‐phase approximation of the full 3-D prestack downward‐continuation operator expressed in the frequency‐wavenumber domain. Although the new common‐azimuth operator is exact only for constant velocity, a ray‐theoretical interpretation of the stationary‐phase approximation enables us to derive an accurate generalization of the method to media with both vertical and lateral velocity variations. The proposed migration method successfully imaged a synthetic data set that was generated assuming strong lateral and vertical velocity gradients. The common‐azimuth downward‐continuation theory also can be applied to the derivation of a computationally efficient constant‐velocity Stolt migration of common‐azimuth data. The Stolt migration formulation leads to the important theoretical result that constant‐velocity common‐azimuth migration can be split into two exact sequential migration processes: 2-D prestack migration along the inline direction, followed by 2-D zero‐offset migration along the cross‐line direction.
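The full 4-D common-azimuth operator is beyond the scope of a short sketch, but its building block, downward continuation by phase shift in the frequency-wavenumber domain, can be illustrated for 2-D constant-velocity data. The dispersion relation and sign conventions below are the standard textbook ones, not the paper's stationary-phase operator:

```python
import numpy as np

def phase_shift_step(P, kx, omega, v, dz):
    """Downward-continue one depth step dz in the frequency-wavenumber
    domain at constant velocity v: multiply by exp(i*kz*dz) with
    kz = sqrt(omega^2 / v^2 - kx^2) for propagating components, and
    attenuate evanescent components (kx too large to propagate)."""
    kz2 = (omega / v) ** 2 - kx ** 2
    prop = kz2 > 0
    root = np.sqrt(np.abs(kz2))
    phase = np.where(prop, np.exp(1j * root * dz), np.exp(-root * dz))
    return P * phase

# Unit-amplitude components at 30 Hz in a 2000 m/s medium: the first two
# wavenumbers propagate (magnitude preserved), the third is evanescent.
kx = np.array([0.0, 0.05, 0.2])
out = phase_shift_step(np.ones(3, dtype=complex), kx, 2.0 * np.pi * 30.0, 2000.0, 10.0)
```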


Geophysics ◽  
1988 ◽  
Vol 53 (5) ◽  
pp. 604-610 ◽  
Author(s):  
David Forel ◽  
Gerald H. F. Gardner

Prestack migration in a constant‐velocity medium spreads an impulse on any trace over an ellipsoidal surface with foci at the source and receiver positions for that trace. The same ellipsoid can be obtained by migrating a family of zero‐offset traces placed along the line segment from the source to the receiver. The spheres generated by migrating the zero‐offset impulses are arranged to be tangent to the ellipsoid. The resulting nonstandard moveout equation is equivalent to two consecutive moveouts, the first requiring no knowledge of velocity and the second being standard normal moveout (NMO). The first of these is referred to as dip moveout (DMO). Because this DMO-NMO algorithm converts any trace to an equivalent set of zero‐offset traces, it can be applied to any ensemble of traces no matter what the variations in azimuth and offset may be. In particular, this three‐dimensional perspective on DMO can be used with multifold inline data. Then it becomes clear that velocity‐independent DMO operates on radial‐trace profiles and not on constant‐offset profiles. Inline data over a three‐dimensional subsurface will be properly stacked by using DMO followed by NMO.
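The tangency construction relies on the defining property of the migration ellipse (a 2-D section of the ellipsoid): every point on it has a total source-to-point-to-receiver path length equal to v·t. A small numeric check of this property, with geometry values invented for the sketch:

```python
import numpy as np

# Assumed 2-D geometry: source at x = -h, receiver at x = +h on the surface.
v, t, h = 2000.0, 1.0, 400.0       # velocity (m/s), traveltime (s), half-offset (m)
a = v * t / 2.0                    # semi-major axis of the migration ellipse
b = np.sqrt(a ** 2 - h ** 2)       # semi-minor axis (depth direction)

# Sample the ellipse and verify its defining property: the path length
# source -> image point -> receiver equals v * t everywhere on it.
phi = np.linspace(0.05, np.pi - 0.05, 50)
x, z = a * np.cos(phi), b * np.sin(phi)
total_path = np.hypot(x + h, z) + np.hypot(x - h, z)
assert np.allclose(total_path, v * t)
```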


Geophysics ◽  
1983 ◽  
Vol 48 (11) ◽  
pp. 1514-1524 ◽  
Author(s):  
Edip Baysal ◽  
Dan D. Kosloff ◽  
John W. C. Sherwood

Migration of stacked or zero‐offset sections is based on deriving the wave amplitude in space from wave field observations at the surface. Conventionally this calculation has been carried out through a depth extrapolation. We examine the alternative of carrying out the migration through a reverse time extrapolation. This approach may offer improvements over existing migration methods, especially in cases of steeply dipping structures with strong velocity contrasts. This migration method is tested using appropriate synthetic data sets.
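A minimal sketch of the reverse-time idea, under assumed grid and wavelet parameters: the explicit second-order finite-difference stencil of the acoustic wave equation is time-symmetric, so the same update that extrapolates the wavefield forward in time also runs it backward.

```python
import numpy as np

def fd_step(p_prev, p_cur, v, dt, dx):
    """One explicit second-order finite-difference step of the 1-D
    acoustic wave equation. The stencil is symmetric in time, so calling
    it with the two most recent states swapped steps the field backward."""
    lap = np.zeros_like(p_cur)
    lap[1:-1] = p_cur[2:] - 2.0 * p_cur[1:-1] + p_cur[:-2]
    return 2.0 * p_cur - p_prev + (v * dt / dx) ** 2 * lap

# Forward-extrapolate a pulse for 20 steps, then reverse-extrapolate:
# the time-symmetric stencil recovers the initial state.
v, dt, dx = 2000.0, 0.001, 10.0            # stable: (v*dt/dx)^2 = 0.04
x = np.arange(101)
p0 = np.exp(-0.5 * ((x - 50) / 3.0) ** 2)  # initial pulse, zero initial rate
states = [p0, p0.copy()]
for _ in range(20):
    states.append(fd_step(states[-2], states[-1], v, dt, dx))
back = [states[-1], states[-2]]
for _ in range(20):
    back.append(fd_step(back[-2], back[-1], v, dt, dx))
assert np.allclose(back[-1], p0, atol=1e-8)
```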


Geophysics ◽  
2001 ◽  
Vol 66 (3) ◽  
pp. 845-860 ◽  
Author(s):  
François Clément ◽  
Guy Chavent ◽  
Susana Gómez

Migration‐based traveltime (MBTT) formulation provides algorithms for automatically determining background velocities from full‐waveform surface seismic reflection data using local optimization methods. In particular, it addresses the difficulty of the nonconvexity of the least‐squares data misfit function. The method consists of parameterizing the reflectivity in the time domain through a migration step and providing a multiscale representation for the smooth background velocity. We present an implementation of the MBTT approach for a 2-D finite‐difference (FD) full‐wave acoustic model. Numerical analysis on a 2-D synthetic example shows the ability of the method to find much more reliable estimates of both long and short wavelengths of the velocity than the classical least‐squares approach, even when starting from very poor initial guesses. This enlargement of the domain of attraction for the global minima of the least‐squares misfit has a price: each evaluation of the new objective function requires, besides the usual FD full‐wave forward modeling, an additional full‐wave prestack migration. Hence, the FD implementation of the MBTT approach presented in this paper is expected to provide a useful tool for the inversion of data sets of moderate size.
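The nonconvexity that MBTT works around can be seen already in one dimension: the least-squares misfit between an observed wavelet and a prediction that is wrong only by a time shift has local minima roughly one period away from the true shift (cycle skipping). A toy illustration, with the wavelet choice and all parameters assumed:

```python
import numpy as np

def ricker(t, f=25.0):
    """Ricker wavelet of peak frequency f (Hz)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Least-squares misfit as a function of the time-shift error s.
t = np.linspace(-0.2, 0.2, 801)
d_obs = ricker(t)
shifts = np.linspace(-0.08, 0.08, 161)
J = np.array([0.5 * np.sum((ricker(t - s) - d_obs) ** 2) for s in shifts])

# Besides the global minimum at zero shift, J has local minima about one
# wavelet period away -- the multimodality that traps local optimization.
mins = [i for i in range(1, len(J) - 1) if J[i] < J[i - 1] and J[i] < J[i + 1]]
```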


2021 ◽  
Vol 38 (2) ◽  
Author(s):  
Nicholas Torres Okita ◽  
Tiago A. Coimbra ◽  
José Ribeiro ◽  
Martin Tygel

ABSTRACT. The use of graphics processing units (GPUs) is an established alternative to traditional multi-core CPU processing, offering performance dozens of times faster on parallel tasks. Another new computing paradigm is the use of the cloud as a replacement for traditional in-house clusters, providing seemingly unlimited computational power, no maintenance costs, and cutting-edge technology, dynamically on user demand. Previously, these two tools were used to accelerate the estimation of Common Reflection Surface (CRS) traveltime parameters, in both the zero-offset and finite-offset domains, delivering very satisfactory results: large time savings from GPU devices alongside cost savings on the cloud. This work extends those results by using GPUs on the cloud to accelerate Offset Continuation Trajectory (OCT) traveltime parameter estimation. The results show that the time and cost savings from GPU devices are even larger than those seen for CRS: up to fifty times faster and sixty times cheaper. This analysis reaffirms that it is possible to save both time and money when using GPU devices on the cloud, and concludes that the larger the data sets and the more computationally intensive the traveltime operators, the larger the improvements. Keywords: cloud computing, GPU, seismic processing.
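The reported speedup and cost factors follow from per-hour billing: cost is runtime times instance price, so a large speedup on a similarly priced instance compounds into a similar cost saving. The instance prices and runtimes below are invented for illustration and are not the paper's benchmark figures:

```python
# Hypothetical per-hour billing comparison (all numbers assumed).
def job_cost(runtime_hours, price_per_hour):
    """Cloud cost of a job billed per instance-hour."""
    return runtime_hours * price_per_hour

cpu_runtime, cpu_price = 100.0, 0.50   # hours, $/hour on a CPU instance
gpu_runtime, gpu_price = 2.0, 0.40     # hours, $/hour on a GPU spot instance

speedup = cpu_runtime / gpu_runtime    # 50x faster
cost_ratio = job_cost(cpu_runtime, cpu_price) / job_cost(gpu_runtime, gpu_price)
```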


Author(s):  
R. R. Colditz ◽  
R. M. Llamas ◽  
R. A. Ressl

Change detection is one of the most important and widely requested applications of terrestrial remote sensing. Despite a wealth of techniques and successful studies, there is still a need for research in remote sensing science. This paper addresses two important issues: the temporal and spatial scales of change maps. Temporal scales relate to the time interval between observations for successful change detection. We compare annual change detection maps accumulated over five years against direct change detection over that period. Spatial scales relate to the spatial resolution of remote sensing products. We compare fractions from 30 m Landsat change maps to 250 m grid cells that match MODIS change products. Results suggest that change detection at annual scales better detects abrupt changes, in particular those that do not persist over a longer period. The analysis across spatial scales strongly recommends the use of an appropriate analysis technique, such as change fractions from fine-spatial-resolution data for comparison with coarse-spatial-resolution maps. Plotting those results in two-dimensional error space and analyzing various criteria, the "lowest cost" criterion, according to a user-defined (here hyperbolic) cost function, was found most useful. In general, we found a poor match between Landsat- and MODIS-based change maps, which, besides obvious differences in the capabilities to detect change, is likely related to change-detection errors in both data sets.
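Change fractions of the kind compared here can be computed by block-averaging a binary fine-resolution change map into coarse cells. A sketch with an 8x aggregation factor standing in for the inexact 30 m to 250 m ratio (all data synthetic):

```python
import numpy as np

def change_fraction(fine_change, factor):
    """Aggregate a binary fine-resolution change map into coarse-cell
    change fractions by averaging factor x factor blocks of fine pixels."""
    h, w = fine_change.shape
    assert h % factor == 0 and w % factor == 0
    blocks = fine_change.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Synthetic 16x16 binary change map -> 2x2 coarse change fractions.
fine = np.zeros((16, 16))
fine[:8, :8] = 1.0      # upper-left coarse cell: fully changed
fine[8:12, 8:] = 1.0    # lower-right coarse cell: half changed
frac = change_fraction(fine, 8)
```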


2021 ◽  
Author(s):  
Martha Frysztacki ◽  
Jonas Hörsch ◽  
Veit Hagenmeyer ◽  
Tom Brown

Energy systems are typically modeled with a low spatial resolution that is based on administrative boundaries such as countries, which eases data collection and reduces computation times. However, a low spatial resolution can lead to sub-optimal investment decisions for renewable generation, transmission expansion, or both. Ignoring power grid bottlenecks within regions tends to underestimate system costs, while combining locations with different renewable capacity factors tends to overestimate costs. We investigate these two competing effects in a capacity expansion model for Europe's future power system that reduces carbon emissions by 95% compared to 1990 levels, taking advantage of newly available high-resolution data sets and computational advances. We vary the model resolution by changing the number of substations, interpolating between a 37-node model, where every country and synchronous zone is modeled with one node, and a 512-node model based on the locations of electricity substations. If we focus on the effect of renewable resource resolution and ignore network restrictions, we find that a higher resolution allows the optimal solution to concentrate wind and solar capacity at sites with higher capacity factors and thus reduces system costs by up to 10.5% compared to a low-resolution model. This results in a big swing from offshore to onshore wind investment. However, if we introduce grid bottlenecks by raising the network resolution, costs increase by up to 19% as generation has to be sourced more locally where demand is high, typically at sites with worse capacity factors. These effects are most pronounced in scenarios where transmission expansion is limited, for example, by low social acceptance.
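The overestimation effect of aggregating renewable sites can be reproduced with a two-site toy calculation (all numbers assumed, not from the study): a single node carrying the average capacity factor needs more installed capacity than siting everything at the better location.

```python
# Toy two-site example (all numbers assumed).
annual_hours = 8760.0
demand_mwh = 100_000.0           # yearly energy demand
cost_per_mw = 1.0e6              # hypothetical annualised cost per MW built

site_cfs = [0.35, 0.15]          # capacity factors of two sites in one region

# High spatial resolution: the optimizer concentrates capacity at the
# better site.
mw_high = demand_mwh / (annual_hours * max(site_cfs))
cost_high = mw_high * cost_per_mw

# Low spatial resolution: one aggregated node carrying the average
# capacity factor, which hides the good site.
cf_avg = sum(site_cfs) / len(site_cfs)
mw_low = demand_mwh / (annual_hours * cf_avg)
cost_low = mw_low * cost_per_mw

assert cost_low > cost_high      # aggregation overestimates system cost
```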


2017 ◽  
Vol 10 (5) ◽  
pp. 1665-1688 ◽  
Author(s):  
Frederik Tack ◽  
Alexis Merlaud ◽  
Marian-Daniel Iordache ◽  
Thomas Danckaert ◽  
Huan Yu ◽  
...  

Abstract. We present retrieval results of tropospheric nitrogen dioxide (NO2) vertical column densities (VCDs), mapped at high spatial resolution over three Belgian cities, based on the DOAS analysis of Airborne Prism EXperiment (APEX) observations. APEX, developed by a Swiss–Belgian consortium on behalf of ESA (European Space Agency), is a pushbroom hyperspectral imager characterised by high spatial resolution and high spectral performance. APEX data were acquired under clear-sky conditions over the two largest and most heavily polluted Belgian cities, i.e. Antwerp and Brussels, on 15 April and 30 June 2015. Additionally, a number of background sites were covered for the reference spectra. The APEX instrument was mounted in a Dornier DO-228 aeroplane operated by Deutsches Zentrum für Luft- und Raumfahrt (DLR). NO2 VCDs were retrieved from spatially aggregated radiance spectra, allowing urban plumes to be resolved at a resolution of 60 × 80 m². The main sources in the Antwerp area appear to be related to the (petro)chemical industry, while traffic-related emissions dominate in Brussels. The NO2 levels observed in Antwerp range between 3 and 35 × 10¹⁵ molec cm⁻², with a mean VCD of (17.4 ± 3.7) × 10¹⁵ molec cm⁻². In the Brussels area, lower levels are found, ranging between 1 and 20 × 10¹⁵ molec cm⁻², with a mean VCD of (7.7 ± 2.1) × 10¹⁵ molec cm⁻². The overall errors on the retrieved NO2 VCDs are on average 21% and 28% for the Antwerp and Brussels data sets, respectively. Low VCD retrievals are mainly limited by noise (1σ slant error), while high retrievals are mainly limited by systematic errors. Compared to coincident car mobile-DOAS measurements taken in Antwerp and Brussels, both data sets are in good agreement, with correlation coefficients around 0.85 and slopes close to unity. APEX retrievals tend to be, on average, 12% and 6% higher for Antwerp and Brussels, respectively.
Results demonstrate that the NO2 distribution in an urban environment, and its fine-scale variability, can be mapped accurately with high spatial resolution and in a relatively short time frame, and the contributing emission sources can be resolved. High-resolution quantitative information about the atmospheric NO2 horizontal variability is currently rare, but can be very valuable for (air quality) studies at the urban scale.
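In DOAS retrievals of this kind, the fitted slant column density is converted to a vertical column through an air-mass factor, VCD = SCD / AMF. A one-line sketch of the conversion; the numbers are purely illustrative, not values from the campaign:

```python
def vertical_column(scd, amf):
    """Convert a DOAS slant column density (SCD) to a vertical column
    density (VCD) using the air-mass factor (AMF): VCD = SCD / AMF."""
    return scd / amf

# Illustrative numbers only: a slant column of 35e15 molec/cm^2 with an
# assumed AMF of 2.0 corresponds to a VCD of 17.5e15 molec/cm^2.
vcd = vertical_column(35e15, 2.0)
```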


2019 ◽  
Vol 11 (7) ◽  
pp. 753 ◽  
Author(s):  
Guodong Zhang ◽  
Hongmin Zhou ◽  
Changjing Wang ◽  
Huazhu Xue ◽  
Jindi Wang ◽  
...  

Continuous, long-term-sequence land surface albedo data have crucial significance for climate simulations and land surface process research. Sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) provide global albedo product data sets with a spatial resolution of 500 m over long time periods. There is demand for new high-resolution albedo data for regional applications. High-resolution observations are often unavailable due to cloud contamination, which makes it difficult to obtain time-series albedo estimations. This paper proposes an "amalgamation albedo" approach to generate daily land surface shortwave albedo with 30 m spatial resolution using Landsat data and the MODIS Bidirectional Reflectance Distribution Function (BRDF)/Albedo product MCD43A3 (V006). Historical MODIS land surface albedo products were averaged to obtain an albedo estimation background, which was used to construct the albedo dynamic model. The Thematic Mapper (TM) albedo, derived via a direct estimation approach, was then introduced to generate high spatio-temporal-resolution albedo data based on the Ensemble Kalman Filter (EnKF) algorithm. Estimation results were compared to field observations for cropland, deciduous broadleaf forest, evergreen needleleaf forest, grassland, and evergreen broadleaf forest domains. The results indicated that, for all land cover types, the estimated albedos coincided with ground measurements at a root mean squared error (RMSE) of 0.0085–0.0152. The proposed algorithm was then applied to regional time-series albedo estimation; the results indicated that it captured the spatial and temporal variation patterns of each site. Taken together, our results suggest that the amalgamation albedo approach is a feasible solution for generating albedo data sets with high spatio-temporal resolution.
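The EnKF analysis step that blends a background ensemble with a new observation can be sketched generically (the stochastic, perturbed-observation variant; dimensions and numbers below are invented for the sketch, not the paper's configuration):

```python
import numpy as np

def enkf_update(X, y, H, r, seed=0):
    """Stochastic (perturbed-observation) Ensemble Kalman Filter analysis.
    X: (n_state, n_ens) forecast ensemble, y: (n_obs,) observations,
    H: (n_obs, n_state) observation operator, r: observation error variance."""
    rng = np.random.default_rng(seed)
    n_obs, n_ens = len(y), X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    P = Xm @ Xm.T / (n_ens - 1)                         # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(n_obs))
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (n_obs, n_ens))
    return X + K @ (Y - H @ X)                          # analysis ensemble

# Toy scalar-albedo example: a 0.20 background ensemble assimilating a
# 0.30 observation moves its mean partway toward the observation.
X = 0.20 + np.random.default_rng(1).normal(0.0, 0.02, (1, 50))
Xa = enkf_update(X, np.array([0.30]), np.eye(1), 0.02 ** 2)
```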

