Image appraisal for 2-D and 3-D electromagnetic inversion

Geophysics ◽  
2000 ◽  
Vol 65 (5) ◽  
pp. 1455-1467 ◽  
Author(s):  
David L. Alumbaugh ◽  
Gregory A. Newman

Linearized methods are presented for appraising resolution and parameter accuracy in images generated with 2-D and 3-D nonlinear electromagnetic (EM) inversion schemes. When direct matrix inversion is used, the model resolution and a posteriori model covariance matrices can be calculated readily. By analyzing individual columns of the model resolution matrix, the spatial variation of the resolution in the horizontal and vertical directions can be estimated empirically. Plotting the diagonal of the model covariance matrix provides an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions, map into parameter error, and thus provides valuable information about the uniqueness of the resulting image. Methods are also derived for image appraisal when the iterative conjugate-gradient technique is applied to solve the inverse problem. An iterative statistical method yields accurate estimates of the model covariance matrix as long as enough iterations are used. Although determining the entire model resolution matrix in a similar manner is computationally prohibitive, individual columns of this matrix can be determined. Thus, the spatial variation in image resolution can be determined by calculating the columns of this matrix for key points in the image domain and then interpolating between them. Examples of the image analysis techniques are provided on 2-D and 3-D synthetic cross-well EM data sets as well as a field data set collected at the Lost Hills oil field in central California.
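To make the appraisal quantities concrete, the sketch below (not the authors' code; the Jacobian J, data standard deviations, damping parameter lam, and regularization operator L are placeholder assumptions) computes the model resolution matrix and the a posteriori model covariance for a damped least-squares formulation of a linearized inversion.

```python
import numpy as np

def appraise(J, sigma_d, L, lam):
    """Model resolution matrix R and a posteriori model covariance C_m
    for a damped least-squares (linearized) inversion."""
    Wd = np.diag(1.0 / sigma_d)        # data weighting from noise estimates
    A = Wd @ J                          # error-weighted sensitivities
    N = A.T @ A + lam * (L.T @ L)       # regularized normal-equation operator
    N_inv = np.linalg.inv(N)
    R = N_inv @ (A.T @ A)               # m_est ~ R @ m_true: column j is the
                                        # point-spread function of parameter j
    C_m = N_inv @ (A.T @ A) @ N_inv     # maps data noise into parameter variance
    return R, C_m

# Toy example: 60 data, 20 parameters.
rng = np.random.default_rng(0)
J = rng.normal(size=(60, 20))
R, C_m = appraise(J, sigma_d=np.full(60, 0.05), L=np.eye(20), lam=1.0)
print(np.diag(R).round(2))    # values near 1 indicate well-resolved parameters
print(np.diag(C_m).round(4))  # a posteriori parameter variances
```

Inspecting a single column of R at a key image point, and interpolating between such points, mirrors the resolution-mapping strategy described for the conjugate-gradient case.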

2021 ◽  
Author(s):  
Kezia Lange ◽  
Andreas C. Meier ◽  
Michel Van Roozendael ◽  
Thomas Wagner ◽  
Thomas Ruhtz ◽  
...  

Airborne imaging DOAS and ground-based stationary and mobile DOAS measurements were conducted during the ESA-funded S5P-VAL-DE-Ruhr campaign in September 2020 in the Ruhr area. The Ruhr area is located in western Germany and is a pollution hotspot in Europe with an urban character as well as large industrial emitters. The measurements are used to validate data from the Sentinel-5P TROPOspheric Monitoring Instrument (TROPOMI), with a focus on the NO₂ tropospheric vertical column product.

Seven flights were performed with the airborne imaging DOAS instrument, AirMAP, providing continuous maps of NO₂ in the layers below the aircraft. These flights cover many S5P ground pixels within an area of about 40 km side length and were accompanied by ground-based stationary measurements and three mobile car DOAS instruments. Stationary measurements were conducted by two Pandora, two zenith-sky, and two MAX-DOAS instruments distributed over three target areas, partly as long-term measurements over a one-year period.

Airborne and ground-based measurements were compared to evaluate the representativeness of the measurements in time and space. With a resolution of about 100 x 30 m², the AirMAP data create a link between the ground-based measurements and the TROPOMI measurements, whose resolution is 3.5 x 5.5 km², and are therefore well suited to validate TROPOMI's tropospheric NO₂ vertical column.

The measurements on the seven flight days show strong variability depending on the target area, the weekday, and the meteorological conditions. We found an overall low bias of the TROPOMI operational NO₂ data for all three target areas, but with varying magnitude on different days. The campaign data set is compared to custom TROPOMI NO₂ products that use different auxiliary data, such as albedo or a priori vertical profiles, to evaluate their influence on the TROPOMI data product. Analyzing and comparing the different data sets provides more insight into the high spatial and temporal heterogeneity of NO₂ and its impact on satellite observations and their validation.


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E293-E299
Author(s):  
Jorlivan L. Correa ◽  
Paulo T. L. Menezes

Synthetic data generated from geoelectric earth models are a powerful tool for evaluating, a priori, the effectiveness of a controlled-source electromagnetic (CSEM) workflow. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking in the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, leading to a large mesh with a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under the Creative Commons license, on the Zenodo platform.
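As an illustration of the noise-contamination step described above, the short sketch below (assumed, not the MR3D distribution code; the field values are stand-ins) adds 1% multiplicative Gaussian noise to a synthetic electric-field component.

```python
import numpy as np

rng = np.random.default_rng(42)
Ex_clean = 1e-12 * (1.0 + rng.random(500))        # stand-in for a computed Ex profile (V/m)
noise_std = 0.01                                   # 1% standard deviation
Ex_noisy = Ex_clean * (1.0 + noise_std * rng.standard_normal(Ex_clean.shape))
print(np.max(np.abs(Ex_noisy / Ex_clean - 1.0)))   # relative perturbations of order 1%
```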


1999 ◽  
Vol 09 (03) ◽  
pp. 195-202 ◽  
Author(s):  
JOSÉ ALFREDO FERREIRA COSTA ◽  
MÁRCIO LUIZ DE ANDRADE NETTO

Determining the structure of data without prior knowledge of the number of clusters or any information about their composition is a problem of interest in many fields, such as image analysis, astrophysics, and biology. Partitioning a set of n patterns in a p-dimensional feature space must be done such that patterns in a given cluster are more similar to each other than to those in other clusters. As there are approximately [Formula: see text] possible ways of partitioning the patterns among K clusters, finding the best solution is very hard when n is large. The search space grows further when the number of partitions is not known a priori. Although the self-organizing feature map (SOM) can be used to visualize clusters, automating knowledge discovery with the SOM is a difficult task. This paper proposes region-based image processing methods for post-processing the U-matrix obtained after the unsupervised learning performed by the SOM. Mathematical morphology is applied to identify regions of neurons that are similar. The number of regions and their labels are found automatically and are related to the number of clusters in a multivariate data set. New data can be classified by labeling them according to the best-matching neuron. Simulations using data sets drawn from finite mixtures of p-variate normal densities are presented, together with the advantages and drawbacks of the method.
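A much simplified sketch of the post-processing idea is shown below; it is not the authors' exact morphology pipeline, and the median threshold, binary opening, and connected-component labelling are assumptions standing in for the full mathematical-morphology treatment of the U-matrix.

```python
import numpy as np
from scipy import ndimage

def segment_umatrix(u_matrix, threshold=None):
    """Label connected regions of similar (low-distance) neurons in a SOM U-matrix."""
    if threshold is None:
        threshold = np.median(u_matrix)           # assumed heuristic threshold
    similar = u_matrix < threshold                # low distances = homogeneous regions
    similar = ndimage.binary_opening(similar)     # morphological cleanup of speckle
    labels, n_regions = ndimage.label(similar)    # one label per candidate cluster
    return labels, n_regions

u = np.random.default_rng(1).random((20, 20))     # stand-in for a 20x20 SOM U-matrix
labels, k = segment_umatrix(u)
print("estimated number of clusters:", k)
```

New samples would then inherit the region label of their best-matching neuron, as the abstract describes.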


2012 ◽  
Vol 52 (No. 4) ◽  
pp. 188-196 ◽  
Author(s):  
Y. Lei ◽  
S. Y. Zhang

Forest modellers have long faced the problem of selecting an appropriate mathematical model to describe tree ontogenetic or size-shape empirical relationships for tree species. A common practice is to develop many models (or a model pool) that include different functional forms and then to select the most appropriate one for a given data set. However, this process may impose subjective restrictions on the functional form. In this process, little attention is paid to the features of different functional forms (e.g. whether they possess an asymptote or an inflection point) or to the intrinsic curve of a given data set. In order to find a better way of comparing and selecting growth models, this paper describes and analyses the characteristics of the Schnute model, whose flexibility and versatility have so far seen little use in forestry. In this study, the Schnute model was applied to different data sets of selected forest species to determine their functional forms. The results indicate that the model shows desirable properties for the examined data sets and allows the different intrinsic curve shapes, such as sigmoid and concave, to be discerned. Since a suitable functional form for a given data set is usually not known prior to the comparison of candidate models, it is recommended that the Schnute model be used as a first step to determine an appropriate functional form for the data set under investigation, so as to avoid imposing a functional form a priori.
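For readers unfamiliar with the model, the sketch below implements the general Schnute growth function (case a ≠ 0, b ≠ 0) and fits it to illustrative data; the ages t1 and t2 and all data values are assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

t1, t2 = 5.0, 100.0          # first and last ages in the (synthetic) data set

def schnute(t, a, b, y1, y2):
    """General Schnute growth function, case a != 0, b != 0."""
    frac = (1.0 - np.exp(-a * (t - t1))) / (1.0 - np.exp(-a * (t2 - t1)))
    return (y1**b + (y2**b - y1**b) * frac) ** (1.0 / b)

age = np.linspace(t1, t2, 25)
rng = np.random.default_rng(0)
height = schnute(age, 0.05, 1.5, 2.0, 30.0) + rng.normal(0.0, 0.3, age.size)
# Positive lower bounds keep a and b away from the a = 0 / b = 0 special cases.
params, _ = curve_fit(schnute, age, height, p0=[0.05, 1.0, 2.0, 30.0],
                      bounds=(1e-3, np.inf))
print("a, b, y1, y2 =", params.round(3))   # fitted (a, b) indicate the curve shape
```

The sign and magnitude of the fitted (a, b) pair are what discriminate sigmoid from concave and other intrinsic curve shapes.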


2014 ◽  
Vol 26 (5) ◽  
pp. 907-919 ◽  
Author(s):  
Abd-Krim Seghouane ◽  
Yousef Saad

This letter proposes an algorithm for linear whitening that minimizes the mean squared error between the original and whitened data without using the truncated eigendecomposition (ED) of the covariance matrix of the original data. This algorithm uses Lanczos vectors to accurately approximate the major eigenvectors and eigenvalues of the covariance matrix of the original data. The major advantage of the proposed whitening approach is its low computational cost when compared with that of the truncated ED. This gain comes without sacrificing accuracy, as illustrated with an experiment of whitening a high-dimensional fMRI data set.
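The sketch below illustrates the general idea with ARPACK's Lanczos-type solver (scipy.sparse.linalg.eigsh) standing in for the letter's specific Lanczos procedure: the leading eigenpairs of the covariance are approximated without forming or fully decomposing it, and the data are whitened in the retained subspace. The dimensions and data are assumed for illustration.

```python
import numpy as np
from scipy.sparse.linalg import eigsh, LinearOperator

rng = np.random.default_rng(0)
n, p = 200, 5000                                   # few samples, many features (fMRI-like)
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)

# Covariance as a matrix-free operator: C v = Xc^T (Xc v) / (n - 1).
cov_op = LinearOperator((p, p), matvec=lambda v: Xc.T @ (Xc @ v) / (n - 1))

k = 20                                             # number of leading eigenpairs kept
vals, vecs = eigsh(cov_op, k=k, which='LM')        # Lanczos-type partial eigensolver
Z = Xc @ (vecs / np.sqrt(vals))                    # whitened, reduced data (n x k)
print(np.abs(np.cov(Z, rowvar=False) - np.eye(k)).max())   # ~0: identity covariance
```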


Author(s):  
Manuel Haussmann ◽  
Fred Hamprecht ◽  
Melih Kandemir

Model selection is treated as a standard performance-boosting step in many machine learning applications. Once all other properties of a learning problem are fixed, the model is selected by grid search on a held-out validation set. This is strictly inapplicable to active learning. Within the standardized workflow, the acquisition function is chosen a priori from the available heuristics, and its success can be observed only after the labeling budget has already been exhausted. More importantly, none of the earlier studies reports an acquisition heuristic that is consistently successful enough to stand out as the single best choice. We present a method to break this vicious circle by defining the acquisition function as a learning predictor and training it with reinforcement feedback collected from each labeling round. As active learning is a scarce-data regime, we bootstrap from a well-known heuristic that filters out the bulk of data points on which all heuristics would agree, and learn a policy that warps the top portion of this ranking in the way most beneficial for the character of a specific data distribution. Our system consists of a Bayesian neural net (the predictor), a bootstrap acquisition function, a probabilistic state definition, and another Bayesian policy network that can effectively incorporate this input distribution. We observe on three benchmark data sets that our method always manages either to invent a new, superior acquisition function or to adapt itself to the a priori unknown best-performing heuristic for each specific data set.
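Purely as a conceptual sketch (the names, the entropy heuristic, and the random stand-in policy below are assumptions, not the authors' Bayesian networks), the following shows the bootstrap-then-warp selection pattern: a base heuristic shortlists candidates, and a learned policy re-ranks that shortlist before labels are requested.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy per pool point, used here as the bootstrap heuristic."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_batch(probs, policy_scores, shortlist_size=100, batch_size=10):
    """probs: (pool, classes) predictive probabilities; policy_scores: callable re-ranker."""
    shortlist = np.argsort(-entropy(probs))[:shortlist_size]   # bootstrap filtering
    warped = policy_scores(shortlist)                          # learned re-ranking ("warp")
    return shortlist[np.argsort(-warped)[:batch_size]]

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10), size=5000)                      # toy predictive probabilities
picked = select_batch(p, policy_scores=lambda idx: rng.random(idx.size))
print(picked)
```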


Geophysics ◽  
1996 ◽  
Vol 61 (2) ◽  
pp. 538-548 ◽  
Author(s):  
Douglas J. LaBrecque ◽  
Michela Miletto ◽  
William Daily ◽  
Abelardo Ramirez ◽  
Earle Owen

An Occam's inversion algorithm for crosshole resistivity data that uses a finite-element method forward solution is discussed. For the inverse algorithm, the earth is discretized into a series of parameter blocks, each containing one or more elements. The Occam's inversion finds the smoothest 2-D model for which the chi-squared statistic equals an a priori value. Synthetic model data are used to show the effects of noise and noise estimates on the resulting 2-D resistivity images. Resolution of the images decreases with increasing noise. The reconstructions are underdetermined, so at low noise levels the images converge to an asymptotic image rather than the true geoelectrical section. If the estimated standard deviation is too low, the algorithm cannot achieve an adequate data fit, the resulting image becomes rough, and irregular artifacts start to appear. When the estimated standard deviation is larger than the correct value, the resolution decreases substantially (the image is too smooth). The same effects are demonstrated for field data from a site near Livermore, California. However, when the correct noise values are known, the Occam's results are independent of the discretization used. A case history of monitoring at an enhanced oil recovery site is used to illustrate problems in comparing successive images over time from a site where the noise level changes. In this case, changes in image resolution can be misinterpreted as actual geoelectrical changes. One solution to this problem is to perform a smoothest-model, but non-Occam's, inversion on later data sets using the parameters found from the background data set.
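A minimal sketch of the Occam idea for a linear(ized) problem is shown below; it is not the authors' 2-D resistivity code, and the toy Jacobian, roughness operator, and grid of Lagrange multipliers are assumptions. Among the smoothness-regularized solutions it keeps the smoothest model whose chi-squared does not exceed the target implied by the assumed noise level.

```python
import numpy as np

def occam_step(J, d, sigma, R, mus):
    """Return (mu, model, chi2) for the smoothest model that fits the data."""
    Wd = np.diag(1.0 / sigma)
    A, b = Wd @ J, Wd @ d
    target = d.size                                   # E[chi^2] = number of data
    best = None
    for mu in mus:
        m = np.linalg.solve(A.T @ A + mu * (R.T @ R), A.T @ b)
        chi2 = np.sum(((J @ m - d) / sigma) ** 2)
        if chi2 <= target and (best is None or mu > best[0]):
            best = (mu, m, chi2)                      # largest mu = smoothest acceptable model
    return best

rng = np.random.default_rng(3)
J = rng.normal(size=(80, 30))
m_true = np.sin(np.linspace(0, np.pi, 30))
sigma = np.full(80, 0.05)
d = J @ m_true + rng.normal(0.0, sigma)
R = np.diff(np.eye(30), axis=0)                        # first-difference roughness operator
mu, m_est, chi2 = occam_step(J, d, sigma, R, mus=np.logspace(-3, 3, 25))
print(f"mu={mu:.3g}, chi2={chi2:.1f}, target={d.size}")
```

Overstating sigma in this sketch lowers the effective chi-squared target relative to the true misfit and yields an overly smooth model, mirroring the behaviour the abstract describes for incorrect noise estimates.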


2015 ◽  
Vol 71 (5) ◽  
pp. 1051-1058 ◽  
Author(s):  
Maxim V. Petoukhov ◽  
Dmitri I. Svergun

A novel approach is presented for an a priori assessment of the ambiguity associated with spherically averaged single-particle scattering. The approach is of broad interest to the structural biology community, allowing the rapid and model-independent assessment of the inherent non-uniqueness of three-dimensional shape reconstruction from scattering experiments on solutions of biological macromolecules. One-dimensional scattering curves recorded from monodisperse systems are nowadays routinely utilized to generate low-resolution particle shapes, but the potential ambiguity of such reconstructions remains a major issue. At present, the (non)uniqueness can only be assessed by a posteriori comparison and averaging of repetitive Monte Carlo-based shape-determination runs. The new a priori ambiguity measure is based on the number of distinct shape categories compatible with a given data set. For this purpose, a comprehensive library of over 14 000 shape topologies has been generated containing up to seven beads closely packed on a hexagonal grid. The computed scattering curves, rescaled to keep only the shape topology rather than the overall size information, provide a 'scattering map' of this set of shapes. For a given scattering data set, one rapidly obtains the number of neighbours in the map and the associated shape topologies, such that in addition to providing a quantitative ambiguity measure the algorithm may also serve as an alternative shape-analysis tool. The approach has been validated in model calculations on geometrical bodies and its usefulness is further demonstrated on a number of experimental X-ray scattering data sets from proteins in solution. A quantitative ambiguity score (a-score) is introduced to provide immediate and convenient guidance to the user on the uniqueness of the ab initio shape reconstruction from the given data set.
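The neighbour-counting idea can be illustrated schematically as below; the normalization, the distance metric, and the toy curve library are assumptions for illustration only and do not reproduce the published procedure or its shape-topology library.

```python
import numpy as np

def shape_signature(q, I, n_points=50):
    """Resample a curve onto a fixed grid and divide out the forward intensity,
    keeping only shape information (an assumed, simplified normalization)."""
    grid = np.linspace(q[0], q[-1], n_points)
    return np.interp(grid, q, I / I[0])

def ambiguity_count(signature, library, tol=0.02):
    """Number of library shapes whose signatures lie within tol (RMS) of the data."""
    d = np.sqrt(np.mean((library - signature) ** 2, axis=1))
    return int(np.sum(d < tol))

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.5, 200)
library = np.stack([shape_signature(q, np.exp(-(q * s) ** 2))
                    for s in rng.uniform(5.0, 15.0, 1000)])   # toy curve library
data_sig = shape_signature(q, np.exp(-(q * 10.0) ** 2))
print("neighbouring shape topologies:", ambiguity_count(data_sig, library))
```

More neighbours within the tolerance means a more ambiguous shape reconstruction, which is the intuition behind the a-score.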


2011 ◽  
Vol 4 (5) ◽  
pp. 775-793 ◽  
Author(s):  
S. M. Illingworth ◽  
J. J. Remedios ◽  
H. Boesch ◽  
S.-P. Ho ◽  
D. P. Edwards ◽  
...  

Abstract. Observations of atmospheric carbon monoxide (CO) can only be made on continental and global scales by remote sensing instruments situated in space. One such instrument is the Infrared Atmospheric Sounding Interferometer (IASI), producing spectrally resolved, top-of-atmosphere radiance measurements from which CO vertical layers and total columns can be retrieved. This paper presents a technique for intercomparisons of satellite data with low vertical resolution. The example in the paper also generates the first intercomparison between an IASI CO data set, in this case that produced by the University of Leicester IASI Retrieval Scheme (ULIRS), and the V3 and V4 operationally retrieved CO products from the Measurements Of Pollution In The Troposphere (MOPITT) instrument. The comparison is performed for a localised region of Africa, primarily for an ocean day-time configuration, in order to develop the technique for instrument intercomparison in a region with a well-defined a priori. By comparing both the standard data and a special version of MOPITT data retrieved using the ULIRS a priori for CO, it is shown that standard intercomparisons of CO are strongly affected by the differing a priori data of the retrievals and by the differing sensitivities of the two instruments. In particular, the differing a priori profiles for MOPITT V3 and V4 data result in systematic retrieved profile changes, as expected. An application of averaging kernels is used to derive a difference quantity which is much less affected by smoothing error and hence more sensitive to systematic error. These conclusions are confirmed by simulations with model profiles for the same region. This technique is used to show that, for the data processed here, the systematic bias between MOPITT V4 and ULIRS IASI data at MOPITT vertical resolution is less than 7% for the comparison data set and on average appears to be less than 4%. The results of this study indicate that intercomparisons of satellite data sets with low vertical resolution should ideally be performed with: retrievals using a common a priori appropriate to the geographic region studied; the application of averaging kernels to compute difference quantities with reduced a priori influence; and a comparison with simulated differences using model profiles for the target gas in the region.
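The averaging-kernel step used in such intercomparisons follows the standard relation x_smoothed = x_a + A (x_high - x_a); the sketch below (toy profiles and kernels, not the ULIRS/MOPITT processing chain) shows how a higher-resolution profile is mapped to the coarser instrument's vertical sensitivity before differencing, so that the comparison is largely free of smoothing error.

```python
import numpy as np

def smooth_with_kernel(x_high, x_apriori, A):
    """Apply averaging kernels A and a priori x_apriori to a high-resolution profile."""
    return x_apriori + A @ (x_high - x_apriori)

# Toy 10-level CO profiles (ppbv) and a simple stand-in averaging-kernel matrix.
levels = 10
x_apriori = np.linspace(120.0, 60.0, levels)
x_high = x_apriori * (1.0 + 0.2 * np.exp(-np.arange(levels) / 3.0))
A = 0.6 * np.eye(levels) + 0.05                   # assumed averaging kernels
x_smoothed = smooth_with_kernel(x_high, x_apriori, A)
print((x_smoothed - x_apriori).round(1))          # departure from the a priori seen by the sounder
```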


2017 ◽  
Vol 59 (3) ◽  
Author(s):  
Felix Beier ◽  
Kai-Uwe Sattler

Abstract. A lot of different indexes have been developed for accelerating search operations on large data sets. Search trees, representing the most prominent class, are ubiquitous in database management systems but are also widely used in non-DBMS applications. One approach to lowering the implementation complexity of these structures is index frameworks such as generalized search trees (GiST). Common data management operations are implemented within the framework, which can be specialized by data organization and evaluation strategies in order to model the actual index type. These frameworks are particularly useful in scientific and engineering applications where the characteristics of the underlying data set are not known a priori and a lot of prototyping is required to find suitable index structures for the workload. However, existing frameworks only abstract data organization and data maintenance aspects to model different index families, while traversal operations for executing searches are implemented serially. This paper presents an approach for enabling parallel processing in GiST in order to leverage the full power of parallel processor architectures for different index implementations at once. Furthermore, results of a prototypical implementation are evaluated on a hybrid CPU/GPU system architecture to verify the applicability of this generic framework idea on different hardware platforms.
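The division of labour the paper builds on can be sketched as follows (a schematic in Python rather than the GiST codebase, with all names assumed): the index family supplies only a `consistent` predicate, while the framework owns the traversal, which is fanned out over qualifying subtrees with a thread pool to mimic the proposed parallel processing.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Node:
    keys: List[Any] = field(default_factory=list)        # predicates / entries
    children: List["Node"] = field(default_factory=list) # empty => leaf

def search(node: Node, query: Any, consistent: Callable[[Any, Any], bool], pool=None):
    """Framework-owned traversal; the index family only defines `consistent`."""
    if not node.children:                                  # leaf: return matching entries
        return [k for k in node.keys if consistent(k, query)]
    subtrees = [c for k, c in zip(node.keys, node.children) if consistent(k, query)]
    if pool is None:                                       # serial fallback (nested calls)
        return [hit for c in subtrees for hit in search(c, query, consistent)]
    futures = [pool.submit(search, c, query, consistent) for c in subtrees]
    return [hit for f in futures for hit in f.result()]

# Example: a tiny range-predicate index instantiated on the framework.
leaf1 = Node(keys=[1, 3, 5]); leaf2 = Node(keys=[7, 9, 11])
root = Node(keys=[(1, 5), (7, 11)], children=[leaf1, leaf2])
consistent = lambda key, q: (key[0] <= q <= key[1]) if isinstance(key, tuple) else key == q
with ThreadPoolExecutor() as pool:
    print(search(root, 7, consistent, pool))               # -> [7]
```

A GPU back end would replace the thread pool with device kernels over the qualifying subtrees, but the framework/family split stays the same.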

