Inversion of magnetotelluric data using a practical inverse scattering formulation

Geophysics ◽  
1986 ◽  
Vol 51 (2) ◽  
pp. 383-395 ◽  
Author(s):  
Kenneth P. Whittall ◽  
D. W. Oldenburg

We present a flexible, one‐dimensional magnetotelluric (MT) inversion algorithm based on inverse scattering theory. The algorithm easily generates different classes of conductivity‐depth profiles so the interpreter may choose models that satisfy any external geologic or geophysical constraints. The two‐stage process is based on the work of Weidelt. The first stage uses the MT frequency‐domain data to construct an impulse response analogous to a deconvolved seismogram with or without a free‐surface assumption. Since this is a linear problem (a Laplace transform), numerous impulse responses may be generated by linear inverse techniques which handle data errors robustly. We minimize four norms of the impulse response in order to construct varied classes of limited‐structure earth models. We choose such models to prevent overinterpreting the limited number of inaccurate MT observations. The second stage of the algorithm maps the impulse response to the conductivity model using any of four Fredholm integral equations of the second kind. We evaluate the performance of each of the four mappings and recommend the Burridge and Gopinath‐Sondhi formulations. We also evaluate three approximations to the second‐stage equations. These approximations are fast and easy to implement on small computers. We find the one which includes first‐order multiple reflections to be the most accurate.
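Stage one of this two-stage scheme is, at heart, an inverse Laplace transform posed as a linear inverse problem. The following is a hypothetical sketch, not the authors' algorithm: it builds a discrete Laplace-transform operator and recovers an impulse response by damped least squares, one simple stand-in for the norm-minimizing constructions described in the abstract.

```python
import numpy as np

# Hypothetical sketch of stage one: the MT datum c(s) is the Laplace
# transform of the impulse response g(t),
#     c(s) = integral_0^T g(t) exp(-s t) dt,
# which is linear in g.  Discretizing on a time grid gives c = A g, and
# damped (zeroth-order Tikhonov) least squares is one robust way to
# construct an impulse response that fits the data.

def laplace_kernel(s_values, t_grid):
    """Discrete Laplace-transform operator: A[i, j] = exp(-s_i t_j) * dt."""
    dt = t_grid[1] - t_grid[0]
    return np.exp(-np.outer(s_values, t_grid)) * dt

def damped_inverse(A, data, mu=1e-6):
    """Minimize |A g - data|^2 + mu |g|^2 (a smallest-model construction)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ data)

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 200)                 # time grid for g(t)
    g_true = np.exp(-2.0 * t) * np.sin(3.0 * t)    # synthetic impulse response
    s = np.linspace(0.5, 20.0, 60)                 # Laplace "frequencies"
    A = laplace_kernel(s, t)
    data = A @ g_true
    g_est = damped_inverse(A, data, mu=1e-8)
    misfit = np.linalg.norm(A @ g_est - data) / np.linalg.norm(data)
    print(f"relative data misfit: {misfit:.1e}")
```

Because the Laplace transform is severely ill-conditioned, the damping mu trades data fit against model structure; varying it, or swapping the l2 model norm for other norms, generates the different classes of impulse responses, and hence conductivity models, that the abstract describes.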

Geophysics ◽  
1982 ◽  
Vol 47 (5) ◽  
pp. 757-770 ◽  
Author(s):  
A. Bamberger ◽  
G. Chavent ◽  
Ch. Hemon ◽  
P. Lailly

The well‐known instability of Kunetz’s (1963) inversion algorithm can be explained by the progressive manner in which the calculations are done (descending from the surface) and by the fact that completely different impedances can yield indistinguishable synthetic seismograms. These difficulties can be overcome by using an iterative algorithm for the inversion of the one‐dimensional (1-D) wave equation, together with a stabilizing constraint on the sums of the jumps of the desired impedance. For computational efficiency, the synthetic seismogram is computed by the method of characteristics, and the gradient of the error criterion is computed by optimal control techniques (adjoint state equation). The numerical results on simulated data confirm the expected stability of the algorithm in the presence of measurement noise (tests include noise levels of 50 percent). The inversion of two field sections demonstrates the practical feasibility of the method and the importance of taking into account all internal as well as external multiple reflections. Reflection coefficients obtained by this method show excellent agreement with well‐log data in a case where standard estimation techniques [deconvolution of a common‐depth‐point (CDP) stacked and normal‐moveout (NMO) corrected section] failed.
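The forward problem underlying such an inversion can be sketched compactly for a Goupillaud (equal-traveltime) layered medium at normal incidence, where the full reflection response, including internal multiples, obeys a simple recursion. This is a toy stand-in, not the authors' characteristics/adjoint-state implementation; in particular, the gradient below is approximated by finite differences rather than optimal control.

```python
import numpy as np

# Toy forward model (not the authors' method): a Goupillaud layered medium
# at normal incidence.  With reflection coefficients
#     r_k = (Z_{k+1} - Z_k) / (Z_{k+1} + Z_k)
# and z = exp(-i w) the two-way delay of one layer, the surface reflection
# response including ALL internal multiples obeys the backward recursion
#     R_k(z) = (r_k + z R_{k+1}(z)) / (1 + r_k z R_{k+1}(z)).

def reflection_coeffs(impedance):
    Z = np.asarray(impedance, dtype=float)
    return (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])

def seismogram(impedance, nt=64):
    r = reflection_coeffs(impedance)
    w = 2.0 * np.pi * np.fft.rfftfreq(nt)      # dt = 1 sample
    z = np.exp(-1j * w)
    R = np.zeros_like(z)
    for rk in r[::-1]:                         # recurse from the deepest interface
        R = (rk + z * R) / (1.0 + rk * z * R)
    return np.fft.irfft(R, n=nt)               # time-domain impulse seismogram

def misfit(impedance, data):
    return 0.5 * np.sum((seismogram(impedance) - data) ** 2)

if __name__ == "__main__":
    true_Z = np.array([1.0, 1.5, 1.2, 2.0])
    data = seismogram(true_Z)
    Z = np.array([1.0, 1.3, 1.3, 1.7])         # starting impedances
    for _ in range(200):                       # plain gradient descent
        g = np.zeros_like(Z)
        for i in range(1, Z.size):             # surface impedance held fixed
            e = np.zeros_like(Z)
            e[i] = 1e-6
            g[i] = (misfit(Z + e, data) - misfit(Z - e, data)) / 2e-6
        Z -= 2.0 * g
    print("recovered impedances:", np.round(Z, 3))
```

Because the seismogram depends on the impedances only through the jumps r_k, a constant scaling of all impedances is invisible to the data, which is one face of the nonuniqueness the abstract points to.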


Geophysics ◽  
1988 ◽  
Vol 53 (1) ◽  
pp. 104-117 ◽  
Author(s):  
S. Levy ◽  
D. Oldenburg ◽  
J. Wang

A linear programming approach is developed to construct a pseudo‐impulse response for magnetotelluric (MT) data. The constructed time function is made up of discrete pulses whose amplitudes depend upon the electromagnetic reflection and transmission coefficients at various layer interfaces. The arrival time of an individual pulse corresponds to the time for a reference signal to travel a particular raypath from the surface to a reflector and back. The display of the impulse responses recovered from many stations produces an MT reflectivity section which is analogous to the image ray section regularly interpreted in reflection seismology. Application of linear programming inversion to one‐dimensional conductivity models shows the viability of the method and validates the physical interpretation of the pseudo‐impulse response function. Using a number of simple two‐dimensional geologic models, we show that a line of MT stations acquired perpendicular to strike produces a reflectivity section which is an image of the explored target. The interpretation of the MT image section follows the conventional guidelines used in reflection seismology; features such as traveltime pullup, primary and multiple reflections, and diffraction events are evident on the final section.
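The linear-programming construction of a sparse pulse train can be sketched as an l1-minimization subject to fitting linearized data, with the signed response split into nonnegative parts so the objective becomes linear. This is a hypothetical illustration using a toy convolution operator, not the paper's MT kernel:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sketch of the linear-programming construction: find a sparse
# pulse train g minimizing its l1 norm subject to fitting linearized data
# d = A g within a tolerance.  Splitting g = p - q with p, q >= 0 makes the
# objective linear.  A is a toy convolution operator, not the MT kernel.

def sparse_pulses(A, d, tol):
    m, n = A.shape
    c = np.ones(2 * n)                          # sum(p) + sum(q) = |g|_1
    A_ub = np.vstack([np.hstack([A, -A]),       #   A g - d <= tol
                      np.hstack([-A, A])])      # -(A g - d) <= tol
    b_ub = np.concatenate([d + tol, tol - d])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n), method="highs")
    p, q = res.x[:n], res.x[n:]
    return p - q

if __name__ == "__main__":
    n = 40
    g_true = np.zeros(n)
    g_true[[5, 18, 30]] = [1.0, -0.5, 0.3]      # three discrete pulses
    wavelet = np.array([1.0, 0.6, 0.2])
    A = np.array([[wavelet[i - j] if 0 <= i - j < wavelet.size else 0.0
                   for j in range(n)] for i in range(n)])
    d = A @ g_true
    g = sparse_pulses(A, d, tol=1e-6)
    print("pulses recovered at samples:", np.flatnonzero(np.abs(g) > 1e-3))
```

The l1 objective is what favors a small number of discrete pulses, matching the physical picture of isolated reflection and transmission events at layer interfaces.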


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. E301-E315 ◽  
Author(s):  
Thomas Kalscheuer ◽  
Juliane Hübert ◽  
Alexey Kuvshinov ◽  
Tobias Lochbühler ◽  
Laust B. Pedersen

Magnetotelluric (MT), radiomagnetotelluric (RMT), and, in particular, controlled-source audiomagnetotelluric (CSAMT) data are often heavily distorted by near-surface inhomogeneities. We developed a novel scheme to invert MT, RMT, and CSAMT data in the form of scalar or tensorial impedances and vertical magnetic transfer functions simultaneously for layer resistivities and electric and magnetic galvanic distortion parameters. The inversion scheme uses smoothness constraints to regularize layer resistivities and either Marquardt-Levenberg damping or the minimum-solution-length criterion to regularize distortion parameters. A depth-of-investigation range is estimated by comparing layered model sections derived from first- and second-order smoothness constraints. Synthetic examples demonstrate that earth models are reconstructed properly for distorted and undistorted tensorial CSAMT data. In the inversion of scalar CSAMT data, such as the determinant impedance or individual tensor elements, the reduced number of transfer functions inevitably leads to increased ambiguity in the distortion parameters. As a consequence of this ambiguity for scalar data, distortion parameters often grow over the iterations to unrealistic absolute values when regularized with the Marquardt-Levenberg scheme. This growth essentially exploits compensating relationships between terms containing electric and/or magnetic distortion. With minimum-solution-length regularization, the distortion parameters converge into a stable configuration after several iterations and attain reasonable values. The inversion algorithm was applied to a CSAMT field data set collected along a profile over a tunnel construction site at Hallandsåsen, Sweden. To avoid erroneous inverse models arising from strong anthropogenic effects on the data, two scalar transfer functions (one scalar impedance and one scalar vertical magnetic transfer function) were selected for inversion. Compared with Marquardt-Levenberg regularization of the distortion parameters, the minimum-solution-length criterion yielded smaller absolute values of distortion parameters and a horizontally more homogeneous distribution of electrical conductivity.
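Marquardt-Levenberg damping is a generic nonlinear least-squares device; a minimal sketch on a toy exponential-fitting problem (not the authors' MT/CSAMT code) shows how the damped normal equations keep weakly constrained parameters, such as the distortion terms discussed above, from taking wild steps.

```python
import numpy as np

# Generic Marquardt-Levenberg sketch on a toy exponential fit (not the
# authors' MT/CSAMT code).  Each iteration solves
#     (J^T J + lam I) dm = J^T r
# and adapts lam: damping shrinks after a successful step and grows after
# a rejected one.

def levenberg_marquardt(f, jac, m0, data, lam=1e-2, iters=50):
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(iters):
        r = data - f(m)
        J = jac(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        if np.sum((data - f(m + dm)) ** 2) < np.sum(r ** 2):
            m += dm          # accepted step: relax the damping
            lam *= 0.5
        else:
            lam *= 2.0       # rejected step: damp harder and retry
    return m

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 30)
    f = lambda m: m[0] * np.exp(-m[1] * x)
    jac = lambda m: np.column_stack([np.exp(-m[1] * x),
                                     -m[0] * x * np.exp(-m[1] * x)])
    data = f(np.array([2.0, 3.0]))              # noiseless synthetic data
    m = levenberg_marquardt(f, jac, [1.0, 1.0], data)
    print("recovered parameters:", np.round(m, 4))
```

Roughly speaking, the minimum-solution-length alternative penalizes the size of the parameters themselves rather than only the size of each step, which is why it can hold poorly resolved distortion parameters at reasonable values where pure step damping lets them drift.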


2010 ◽  
Vol 46 (4) ◽  
pp. 777-783
Author(s):  
Antônio Edson de Souza Lucena ◽  
Divaldo de Almeida Sampaio ◽  
Ednaldo Rosas da Silva ◽  
Virgínia Florêncio de Paiva ◽  
Ana Cláudia Santiago ◽  
...  

Highly purified intravenous immunoglobulin G concentrate (IV IgG) was produced using polyethylene glycol combined with a single-stage ethanol precipitation, instead of the classic Cohn-Oncley process, which employs cold alcohol as the precipitating agent in a three-stage process. The crude fraction, containing more than 95% immunoglobulin G, was purified by liquid chromatography with a cation exchanger, CM-Sepharose, as the stationary phase. During the process, the product was subjected to two-stage viral inactivation. The first stage used sodium caprylate, 30 mM at pH 5.1 ± 0.1, and the second stage used a solvent-detergent mixture. The finished product was formulated at 5% with 10% sucrose as the stabilizing agent. The process yields 3.3 g of IgG per liter of plasma. Analysis of the finished product showed an anti-complementary activity lower than 1 CH50. Polymer and aggregate levels were lower than 3% in the five batches studied. Analysis of neutralizing capacity showed antibacterial and antiviral antibodies at concentrations at least three times higher than those found in the source plasma. The finished product fulfilled all purity requirements stated in the 4th edition of the European Pharmacopoeia.


2017 ◽  
Vol 23 (1) ◽  
pp. 423-429
Author(s):  
Dan Popescu ◽  
Cristina State ◽  
Livia Toanca ◽  
Ioana Pavel

Abstract Technological, social, and cultural changes generated by the digital age have a significant impact on both the individual and society as a whole [1]. This is the context in which our research aimed to reveal the extent to which SMEs in our country are prepared to cope with these changes and can adapt to an increasingly turbulent and unpredictable environment [2]. Based on the three hypotheses of our scientific approach, sampling used quotas proportional to the distribution by county and an optimally stratified model for the distribution by field of activity. As a means of investigation we used a questionnaire of 26 questions, answered by 598 SMEs, with the purpose of identifying, on the one hand, the strategic management methods used and, on the other hand, the uptake of digital means. The responses to the questionnaire were analyzed by various statistical and econometric methods. In the first stage we used descriptive statistics to identify peculiarities of respondents and to compare homogeneous groups. In the second stage of analysis, to draw deductive statistical conclusions, we used correlation, linear regression, and analysis of variance (ANOVA) in SPSS 16.0 for Windows. Following validation of the research hypotheses, at the end of the work we formulate a series of proposals to improve the strategic management of SMEs in Romania in the digital age.


2012 ◽  
Vol 27 (2) ◽  
pp. 187-219 ◽  
Author(s):  
Shu-Heng Chen ◽  
Chia-Ling Chang ◽  
Ye-Rong Du

Abstract This paper reviews the development of agent-based (computational) economics (ACE) from an econometrics viewpoint. The review comprises three stages, characterizing the past, the present, and the future of this development. The first two stages can be interpreted as an attempt to build the econometric foundation of ACE and, through that, enrich its empirical content. The second stage may then invoke a reverse reflection on the possible agent-based foundation of econometrics. While ACE modeling has been applied to different branches of economics, the one, and probably the only one, which is able to provide evidence of this three-stage development is finance or financial economics. We will, therefore, focus our review only on the literature of agent-based computational finance, or, more specifically, the agent-based modeling of financial markets.


2019 ◽  
Vol 19 (1) ◽  
pp. 26-35 ◽  
Author(s):  
Xuan Luo ◽  
Gaoming Jiang ◽  
Honglian Cong

Abstract This paper focuses on achieving better garment simulation results together with higher simulation speed. For simplicity and clarity, a notation “PART” is defined to indicate areas between the garment and the human body that satisfy certain distance constraints. The discrete mechanical model is obtained by a two-stage process. In the first stage, the garment is divided into several PARTs constrained by distance. In the second stage, the mechanical model of each PART is formulated as a mathematical expression, yielding the mechanical model of the whole garment. By varying the constraint distance, the simulation result and the simulation speed can be observed, and a desired value can be chosen as the optimum. The results of simulations and experiments demonstrate that better performance can be achieved at higher speed, saving runtime while retaining acceptable simulation results, and they verify the efficiency of the proposed scheme.


Author(s):  
Angel L. Meroño-Cerdan ◽  
Pedro Soto-Acosta ◽  
Carolina Lopez-Nicolas

This study seeks to assess the impact of collaborative technologies on innovation at the firm level. Collaborative technologies’ influence on innovation is considered here as a multi-stage process that starts at adoption and extends to use. Thus, the effect of collaborative technologies on innovation is examined not only directly, through the simple presence of collaborative technologies, but also based on actual collaborative technologies’ use. Given that firms can use this technology for different purposes, collaborative technologies’ use is measured according to three orientations: e-information, e-communication, and e-workflow. To achieve these objectives, a research model is developed for assessing, on the one hand, the impact of the adoption and use of collaborative technologies on innovation and, on the other hand, the relationship between adoption and use of collaborative technologies. The research model is tested using a dataset of 310 Spanish SMEs. The results showed that collaborative technologies’ adoption is positively related to innovation. Also, as hypothesized, distinct collaborative technologies were found to be associated with different uses. In addition, the study found that while e-information had a positive and significant impact on innovation, e-communication and e-workflow did not.


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5812
Author(s):  
Wentian Wang ◽  
Sixin Liu ◽  
Xuzhang Shen ◽  
Wenjun Zheng

Directional borehole radar can accurately locate and image geological targets around the borehole, overcoming the limitation of conventional borehole radar, which can determine only the depth of a target and its distance from the borehole. The directional borehole radar under consideration consists of a transmitting antenna and four receiving antennas equally distributed on a ring in the borehole. The nonuniformity caused by the borehole and sonde, as well as the mutual coupling among the four receiving antennas, seriously affects the received signal and thus interferes with azimuth recognition of targets. In this paper, the finite-difference time-domain (FDTD) method, including subgridding, is applied to study these effects and interferences and to characterize the influence of the borehole, the sonde, and the mutual coupling among the receiving antennas. The results show that, without considering the sonde and the fluid in the borehole, the single-transmitter, single-receiver borehole radar system does not exhibit resonance, but the waveform of the reflected wave is visibly distorted. For the four-receiver borehole radar system, there is obvious resonance, caused by multiple reflections between the receiving antennas. However, when the fluid in the borehole is water and the relative permittivity of the sonde is sufficiently low, the resonance disappears; that is, the generation of resonance requires a material of large relative permittivity between the receiving antennas. When the influence of the sonde is considered, the resonance disappears because the low relative permittivity of the sonde increases the propagation speed of the electromagnetic wave between the antennas, removing the conditions for resonance.
In addition, the diameters of the sonde and of the circular array of receiving antennas affect the received signal: the larger the diameter of the sonde and the larger the diameter of the circular array, the better the differentiation of the received signals. This research provides scientific guidance for the design and application of borehole radar in the future.
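The simulations described above rest on the FDTD method. As a reminder of its core update equations, here is a minimal one-dimensional Yee-scheme sketch; the paper's model is three-dimensional with subgrids, borehole fluid, sonde, and antennas, so every grid size and material value below is an illustration only.

```python
import numpy as np

# Minimal 1-D FDTD (Yee scheme) sketch.  Electric and magnetic fields are
# staggered in space and leapfrogged in time; material contrast enters
# through the relative permittivity eps_r.  All values are illustrative.

def fdtd_1d(nz=200, nt=400, src=20, eps_r=None):
    c0, dz = 3.0e8, 0.01                       # cell size: 1 cm
    dt = dz / (2.0 * c0)                       # Courant-stable time step
    eps0, mu0 = 8.854e-12, 4.0e-7 * np.pi
    eps = np.ones(nz) if eps_r is None else np.asarray(eps_r, dtype=float)
    Ez = np.zeros(nz)
    Hy = np.zeros(nz - 1)
    for n in range(nt):
        Hy += dt / (mu0 * dz) * (Ez[1:] - Ez[:-1])                     # update H
        Ez[1:-1] += dt / (eps0 * eps[1:-1] * dz) * (Hy[1:] - Hy[:-1])  # update E
        Ez[src] += np.exp(-((n - 60.0) / 15.0) ** 2)                   # soft Gaussian source
    return Ez

if __name__ == "__main__":
    eps = np.ones(200)
    eps[100:120] = 81.0                        # e.g., a water-filled section
    Ez = fdtd_1d(eps_r=eps)
    print("max |Ez| on the grid:", float(np.max(np.abs(Ez))))
```

Subgridding refines the cell size locally (for example, around the antennas) while keeping a coarser grid elsewhere, which is the role it plays in borehole-scale models such as the one above.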


1988 ◽  
Vol 20 (1) ◽  
pp. 143-147 ◽  
Author(s):  
T. Welander

A multi-stage process for treatment of CTMP effluent has been developed. It comprises primary settling and four biological stages. The concentration of hydrogen peroxide, a compound which is toxic to anaerobic bacteria, is reduced in the first biological stage by means of the biocatalytic action of biomass that is recycled from the following acidogenic and/or aerobic stages. The second stage is an acidogenic stage, in which volatile fatty acids are formed and the remaining peroxide is decomposed. A mixture of aluminum, iron, and calcium salts is added to the effluent in order to detoxify compounds which are toxic to methanogenic bacteria. The main part of the COD and BOD removal takes place in the third stage, the methanogenic stage, which is followed by an aerobic stage for polishing and removal of bad-smelling compounds. The COD and BOD7 removals in the anaerobic part of the process are 60% and 90%, respectively, and the methane yield is 0.20-0.25 Nm3/kg COD removed.

