Simultaneous reconstruction of 1-D susceptibility and conductivity from electromagnetic data

Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 33-47 ◽  
Author(s):  
Zhiyi Zhang ◽  
Douglas W. Oldenburg

In this paper, we develop an inversion algorithm to simultaneously recover 1-D distributions of electrical conductivity and magnetic susceptibility from a single data set. The earth is modeled as a series of homogeneous layers of known thickness with constant but unknown conductivities and susceptibilities. The medium of interest is illuminated by a horizontal circular loop source located above the surface of the earth. The secondary signals from the earth are received by a circular loop receiver located some distance from the source. The model objective function in the inversion, which we refer to as the cost function, is a weighted sum of the model objective functions of conductivity and susceptibility. We minimize this cost function subject to the data constraints and show, through 1-D synthetic examples, how the choice of weights for the conductivity and susceptibility objective functions affects the inversion results. We also invert 3-D synthetic and field data. From these examples we conclude that simultaneous inversion of electromagnetic (EM) data can provide useful information about the conductivity and susceptibility distributions.
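The weighted-sum objective described above can be sketched as follows; a minimal illustration assuming simple smallest-model objectives (sum of squared deviations from a reference model), with all function and variable names hypothetical:

```python
import numpy as np

def joint_cost(m_sigma, m_chi, ref_sigma, ref_chi, w_sigma, w_chi):
    """Weighted sum of conductivity and susceptibility model objectives.

    Each objective here is a simple squared deviation from a reference
    model; a schematic stand-in for the paper's objective functions,
    not the actual expressions used in the inversion.
    """
    phi_sigma = float(np.sum((np.asarray(m_sigma) - ref_sigma) ** 2))
    phi_chi = float(np.sum((np.asarray(m_chi) - ref_chi) ** 2))
    # Changing w_sigma vs. w_chi shifts how strongly each physical
    # property is regularized relative to the other.
    return w_sigma * phi_sigma + w_chi * phi_chi
```

Varying the two weights while holding the data misfit constraint fixed is exactly the experiment the synthetic examples explore.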

Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1292 ◽  
Author(s):  
Xingdong Li ◽  
Hewei Gao ◽  
Fusheng Zha ◽  
Jian Li ◽  
Yangwei Wang ◽  
...  

This paper focuses on designing a cost function for selecting a foothold for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit–Hartenberg (DH) parameters, and a default foothold is then defined based on the model. A Time of Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which the terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, and the training data set is generated from the 2.5D elevation map of a real terrain under the guidance of experts. Four candidate footholds around the default foothold are randomly sampled, and the expert ranks the four candidates, rotating and scaling the view to see the terrain clearly. Lastly, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach with the original standard static gait, the proposed cost function shows better performance.
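Once the weight vector has been learned, the selection step reduces to scoring each candidate and taking the cheapest one; a minimal sketch (the SVM training itself is omitted, and all names are hypothetical):

```python
import numpy as np

def select_foothold(candidate_features, w):
    """Pick the candidate foothold with the lowest weighted-sum cost.

    candidate_features : (n_candidates, n_features) array of terrain and
        relative-pose features per candidate.
    w : learned weight vector (stand-in for the SVM-trained weights).
    """
    costs = np.asarray(candidate_features, dtype=float) @ np.asarray(w, dtype=float)
    best = int(np.argmin(costs))  # cheapest candidate wins
    return best, costs
```

With expert-ranked pairs of candidates, the same dot-product form is what a ranking-style SVM would fit the weights for.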


Geophysics ◽  
2004 ◽  
Vol 69 (4) ◽  
pp. 898-908 ◽  
Author(s):  
Zhiyi Zhang ◽  
Liming Yu ◽  
Berthold Kriegshäuser ◽  
Lev Tabarovsky

We have developed a new algorithm that retrieves information about relative dip angle, relative azimuth angle, vertical resistivity, and horizontal resistivity from multicomponent EM induction logging data. To investigate how relative dip and azimuth angles affect multicomponent induction logging data, we performed a sensitivity analysis using an anisotropic whole space model. Based upon the sensitivity analysis, we designed a two‐step procedure to recover relative dip, relative azimuth, horizontal resistivity, and vertical resistivity. In the first step, the observed data are transformed into a new data set independent of the azimuth angle; a simultaneous inversion method recovers relative dip angle, vertical resistivity, and horizontal resistivity. In the second step, a 1D line search is performed to determine the relative azimuth angle. Synthetic and field data tests indicate that the new inversion algorithm can extract information about relative dip and azimuth angles as well as the anisotropic resistivity structure from multicomponent induction logging data.
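The second step, a 1-D search over relative azimuth, can be sketched as a brute-force line search over a grid of trial angles; a hypothetical stand-in for the paper's procedure, with any periodic misfit functional plugged in as a callable:

```python
import numpy as np

def azimuth_line_search(misfit, n=360):
    """Brute-force 1-D line search over relative azimuth in [0, 2*pi).

    misfit : callable returning the data misfit at a trial azimuth angle
        (a placeholder for the azimuth-dependent misfit in the paper).
    n : number of grid points; resolution is 2*pi/n radians.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    values = np.array([misfit(a) for a in angles])
    return angles[int(np.argmin(values))]
```

A grid search is robust for a 1-D periodic unknown; a local refinement around the best grid point could follow if higher precision were needed.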


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. F203-F214 ◽  
Author(s):  
A. Abubakar ◽  
M. Li ◽  
G. Pan ◽  
J. Liu ◽  
T. M. Habashy

We have developed an inversion algorithm for jointly inverting controlled-source electromagnetic (CSEM) data and magnetotelluric (MT) data. It is well known that CSEM and MT data provide complementary information about the subsurface resistivity distribution; hence, it is useful to derive earth resistivity models that simultaneously and consistently fit both data sets. Because we are dealing with a large-scale computational problem, one usually uses an iterative technique in which a predefined cost function is optimized. One of the issues of this simultaneous joint inversion approach is how to assign the relative weights on the CSEM and MT data in constructing the cost function. We propose a multiplicative cost function instead of the traditional additive one. This function does not require an a priori choice of the relative weights between these two data sets. It will adaptively put CSEM and MT data on equal footing in the inversion process. The inversion is accomplished with a regularized Gauss-Newton minimization scheme where the model parameters are forced to lie within their upper and lower bounds by a nonlinear transformation procedure. We use a line search scheme to enforce a reduction of the cost function at each iteration. We tested our joint inversion approach on synthetic and field data.
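The contrast between the additive and multiplicative forms can be shown schematically; these are simplified illustrations, not the paper's exact expressions:

```python
def additive_cost(phi_csem, phi_mt, w):
    """Traditional additive joint cost: requires an a priori weight w
    balancing the CSEM misfit against the MT misfit."""
    return w * phi_csem + (1.0 - w) * phi_mt

def multiplicative_cost(phi_csem, phi_mt):
    """Multiplicative joint cost: the product of the two (normalized)
    data misfits. No relative weight has to be chosen, and the product
    only decreases when neither factor is neglected, which keeps both
    data sets on an equal footing during minimization."""
    return phi_csem * phi_mt
```

In the additive form a poor choice of `w` lets the optimizer fit one data set at the expense of the other; the product form removes that tuning knob.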


Geophysics ◽  
2008 ◽  
Vol 73 (4) ◽  
pp. F165-F177 ◽  
Author(s):  
A. Abubakar ◽  
T. M. Habashy ◽  
V. L. Druskin ◽  
L. Knizhnerman ◽  
D. Alumbaugh

We present 2.5D fast and rigorous forward and inversion algorithms for deep electromagnetic (EM) applications that include crosswell and controlled-source EM measurements. The forward algorithm is based on a finite-difference approach in which a multifrontal LU decomposition algorithm simulates multisource experiments at nearly the cost of simulating one single-source experiment for each frequency of operation. When the size of the linear system of equations is large, the use of this noniterative solver is impractical. Hence, we use the optimal grid technique to limit the number of unknowns in the forward problem. The inversion algorithm employs a regularized Gauss-Newton minimization approach with a multiplicative cost function. By using this multiplicative cost function, we do not need a priori data to determine the so-called regularization parameter in the optimization process, making the algorithm fully automated. The algorithm is equipped with two regularization cost functions that allow us to reconstruct either a smooth or a sharp conductivity image. To increase the robustness of the algorithm, we also constrain the minimization and use a line-search approach to guarantee the reduction of the cost function after each iteration. To demonstrate the pros and cons of the algorithm, we present synthetic and field data inversion results for crosswell and controlled-source EM measurements.
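The line-search safeguard that guarantees a reduction of the cost function after each iteration can be sketched as generic backtracking; the cost function and update step here are hypothetical placeholders, not the paper's Gauss-Newton machinery:

```python
import numpy as np

def backtracking_step(cost, x, step, shrink=0.5, max_tries=20):
    """Shrink a proposed model update until the cost actually decreases.

    cost : callable returning the (scalar) cost of a model vector.
    x    : current model vector.
    step : proposed update (e.g. a Gauss-Newton direction).
    """
    f0 = cost(x)
    for _ in range(max_tries):
        x_new = x + step
        if cost(x_new) < f0:      # reduction achieved: accept
            return x_new
        step = shrink * step      # otherwise halve the step and retry
    return x  # no reduction found; keep the current model
```

Accepting a step only when it lowers the cost is what makes each outer iteration monotone, at the price of a few extra cost evaluations.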


Geophysics ◽  
1997 ◽  
Vol 62 (2) ◽  
pp. 436-448 ◽  
Author(s):  
Yuval ◽  
Douglas W. Oldenburg

We develop a process to estimate Cole‐Cole parameters from time‐domain induced polarization (IP) surveys carried out over a nonuniform earth. The recovery of parameters takes the following steps. We first divide the earth into rectangular cells and assume that the Cole‐Cole decay parameters η, τ, and c are constant in each cell. Apparent chargeability data measured at times t_k after the cessation of the input current are inverted using a 2-D inversion algorithm to recover the intrinsic chargeability structure η_k(x, z) for k = 1, …, L, where L is the number of time channels measured. When carrying out this inversion, it is necessary to introduce a normalization criterion so that the inversion outputs from the different time channels can be meaningfully combined. The L chargeability structures provide L estimates of the chargeability decay curve for each cell. The desired intrinsic Cole‐Cole parameters are recovered from these decay curves using a very fast simulated annealing (VFSA) algorithm. Application of the process in all cells provides interpretation maps of η(x, z), τ(x, z), and c(x, z). Our analysis is demonstrated on a synthetic example and is implemented on a field data set. The application of the process to field data yields reasonable results.
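The forward decay curve being fit in each cell can be computed from the standard series expansion of the time-domain Cole-Cole response, η(t) = η Σₙ (−1)ⁿ (t/τ)ⁿᶜ / Γ(1 + nc); a sketch valid for t/τ below roughly 1, where the series converges (the fitting step itself, whether VFSA or anything else, is not shown):

```python
import math

def cole_cole_decay(t, eta, tau, c, n_terms=50):
    """Time-domain Cole-Cole decay via its series expansion.

    eta : intrinsic chargeability, tau : time constant,
    c   : frequency-dependence exponent.
    Converges for t/tau less than about 1; for c = 1 the series
    reduces exactly to eta * exp(-t / tau).
    """
    s = 0.0
    for n in range(n_terms):
        s += (-1) ** n * (t / tau) ** (n * c) / math.gamma(1 + n * c)
    return eta * s
```

Evaluating this curve at the measured time channels t_k and comparing with the L recovered chargeabilities is the per-cell misfit that the annealing search would minimize.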


Author(s):  
Andrew Kurzawski ◽  
Ofodike A. Ezekoye

The heat-release rate (HRR) of a burning item is key to understanding the thermal effects of a fire on its surroundings. It is, perhaps, the most important variable used to characterize a burning fuel packet and is defined as the rate of energy released by the fire. HRR is typically determined using a gas measurement calorimetry method. In this study, an inversion algorithm is presented for conducting calorimetry on fires with unknown HRRs located in a compartment. The algorithm compares predictions of a forward model with observed heat fluxes from synthetically generated data sets to determine the HRR that minimizes a cost function. The effects of tuning a weighting parameter in the cost function and the issues associated with two different forward models of a compartment fire are examined.
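A schematic form of such a cost function, with `weight` playing the role of the tuning parameter and the forward model supplied as a hypothetical placeholder callable, might look like this:

```python
import numpy as np

def hrr_cost(hrr, observed_flux, forward, weight):
    """Data misfit plus a weighted roughness penalty on the HRR curve.

    hrr           : trial heat-release-rate time series.
    observed_flux : measured (here, synthetic) heat fluxes.
    forward       : callable mapping an HRR series to predicted fluxes
                    (a stand-in for the compartment-fire forward model).
    weight        : tuning parameter trading data fit against smoothness.
    """
    residual = np.asarray(observed_flux) - forward(np.asarray(hrr))
    roughness = np.diff(hrr)  # penalize jumpy HRR estimates
    return float(np.sum(residual ** 2) + weight * np.sum(roughness ** 2))
```

Sweeping `weight` reproduces the trade-off the study examines: too small and the recovered HRR chases measurement noise, too large and genuine transients are smoothed away.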


2019 ◽  
Vol 220 (1) ◽  
pp. 308-322 ◽  
Author(s):  
Barbara Romanowicz ◽  
Li-Wei Chen ◽  
Scott W French

SUMMARY Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography. This can be addressed by redefining the cost function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green’s function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography. Here we present the results of proof of concept testing of such an approach for a synthetic 3-component long period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales.
We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function. The proposed approach is computationally very efficient. While application to more realistic synthetic data sets is beyond the scope of this paper, as well as to real data, since that requires additional steps to account for such issues as missing data, we illustrate how this methodology can help inform first order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately, when using conventional full waveform tomography based on single-event wavefield computations.
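The station-pair cross-correlation applied identically to the stacked data and to the stacked synthetics is, at its core, a one-line operation; a minimal sketch on discrete records:

```python
import numpy as np

def station_pair_correlation(u_i, u_j):
    """Cross-correlation of the source-stacked records at two stations.

    Because the same operation is applied to the observed and synthetic
    stacked waveforms before forming the misfit, a faithful Green's
    function reconstruction between the stations is not required.
    """
    return np.correlate(np.asarray(u_i), np.asarray(u_j), mode="full")
```

Windows with large energy in the resulting correlogram (fundamental-mode surface waves) can then be separated and down-weighted to bring out overtones and body waves.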


2015 ◽  
Vol 14 (4) ◽  
pp. 165-181 ◽  
Author(s):  
Sarah Dudenhöffer ◽  
Christian Dormann

Abstract. The purpose of this study was to replicate the dimensions of the customer-related social stressors (CSS) concept across service jobs, to investigate their consequences for service providers’ well-being, and to examine emotional dissonance as a mediator. Data from 20 studies comprising different service jobs (N = 4,199) were integrated into a single data set and meta-analyzed. Confirmatory factor analyses and exploratory principal component analysis confirmed four CSS scales: disproportionate expectations, verbal aggression, ambiguous expectations, and disliked customers. These CSS scales were associated with burnout and job satisfaction. Most of the effects were partially mediated by emotional dissonance. Further analyses revealed that differences among jobs exist with regard to the factor solution. However, associations between CSS and outcomes are mainly invariant across service jobs.


2020 ◽  
pp. 1-14
Author(s):  
Esraa Hassan ◽  
Noha A. Hikal ◽  
Samir Elmuogy

Nowadays, Coronavirus (COVID-19) is considered one of the most critical pandemics on Earth, owing to its ability to spread rapidly between humans as well as animals. COVID-19 is expected to spread around the world; around 70% of the Earth's population might be infected with COVID-19 in the coming years. Therefore, an accurate and efficient diagnostic tool is highly required, which is the main objective of our study. Manual classification was mainly used to detect different diseases, but it takes too much time and carries the risk of human error. Automatic image classification reduces doctors' diagnostic time, which could save human lives. We propose an automatic classification architecture based on a deep neural network, called the Worried Deep Neural Network (WDNN) model, with transfer learning. Comparative analysis reveals that the proposed WDNN model outperforms three pre-trained models (InceptionV3, ResNet50, and VGG19) in terms of various performance metrics. Due to the shortage of COVID-19 data, data augmentation was used to increase the number of images in the positive class, and normalization was then applied to make all images the same size. Experimentation was done on a COVID-19 dataset collected from different cases, with a total of 2,623 images (1,573 training, 524 validation, 524 test). Our proposed model achieved 99.046%, 98.684%, 99.119%, and 98.90% in terms of accuracy, precision, recall, and F-score, respectively. The results are compared with both traditional machine learning methods and those using convolutional neural networks (CNNs). The results demonstrate the ability of our classification model to be used as an alternative to the current diagnostic tool.
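The reported evaluation metrics follow directly from confusion-matrix counts; a small helper for computing them (illustrative only, not the paper's code):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F-score from confusion counts.

    tp/fp : true/false positives, fn/tn : false/true negatives.
    These are the standard definitions of the four metrics the WDNN
    model is evaluated with.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score
```

Reporting precision and recall alongside accuracy matters here because the augmented positive class keeps the dataset from being strongly imbalanced, but the clinical cost of a false negative is still much higher than that of a false positive.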


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters in the cost function is the AI learning process (AI learning for convenience). If AI learning is performed well, the value of the cost function reaches its global minimum. For AI to be well learned, the parameters should no longer change once the cost function has reached its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameters when the value of the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem of the momentum method, we incorporate the value of the cost function into the update rule. Therefore, as learning proceeds, this mechanism reduces the amount of change in the parameters in proportion to the value of the cost function. We verify the method through a proof of convergence and through numerical experiments against existing methods to ensure that learning works well.
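One schematic reading of the idea, scaling the momentum update by the current cost value so the step vanishes as the cost approaches a global minimum at zero, is sketched below; this is an illustration under that assumption, not the paper's exact formula:

```python
def damped_momentum_step(cost, grad, x, v, lr=0.1, beta=0.9):
    """One cost-damped momentum update.

    cost, grad : callables for the cost function and its gradient.
    x, v       : current parameter and velocity.
    Classical momentum is x -= lr * v; multiplying by cost(x) shrinks
    the update as the cost approaches a global minimum assumed to be
    at zero, which counteracts the 'non-stop' behavior of plain
    momentum near the minimum.
    """
    v = beta * v + grad(x)          # accumulate velocity as usual
    x = x - lr * cost(x) * v        # damp the step by the cost value
    return x, v
```

For cost functions whose minimum value is not zero, the same mechanism would need the cost shifted or normalized first; that detail is outside this sketch.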

