Apparent-magnetization mapping using entropic regularization

Geophysics ◽  
2010 ◽  
Vol 75 (2) ◽  
pp. L39-L50 ◽  
Author(s):  
João B. Silva ◽  
Suzan S. Vasconcelos ◽  
Valeria C. Barbosa

A new apparent-magnetization mapping method on the horizontal plane combines minimization of the first-order entropy with maximization of the zeroth-order entropy of the estimated magnetization. The interpretation model is a grid of vertical, juxtaposed prisms in both horizontal directions. The tops and bottoms of the magnetic sources are assumed to be horizontal, and the magnetization of each prism is estimated. Minimization of the first-order entropy favors solutions with sharp borders, and maximization of the zeroth-order entropy prevents the tendency of the estimated source to collapse into a single prism with large magnetization. Thus, a judicious combination of both constraints can lead to solutions characterized by regions with virtually constant magnetizations separated by sharp discontinuities. The method is applied to synthetic data from simulated intrusive bodies in sediments that have horizontal tops. A comparison with results obtained with the common Tikhonov regularization (smoothness constraint) shows that both methods locate the central positions of the sources equally well; entropic regularization, however, delineates the boundaries of the bodies in greater detail. Both the proposed and the smoothness constraints are applied to real anomaly data over a magnetic skarn in Butte Valley, Nevada, U.S.A. Entropic regularization produced an estimated magnetization distribution with sharper boundaries, smaller volume, and higher apparent magnetization than the results produced by incorporating the smoothness constraint.

Geophysics ◽  
2007 ◽  
Vol 72 (4) ◽  
pp. I51-I60 ◽  
Author(s):  
João B. C. Silva ◽  
Francisco S. Oliveira ◽  
Valéria C. F. Barbosa ◽  
Haroldo F. Campos Velho

We present a new apparent-density mapping method on the horizontal plane that combines the minimization of the first-order entropy with the maximization of the zeroth-order entropy of the estimated density contrasts. The interpretation model consists of a grid of vertical, juxtaposed prisms in both horizontal directions. We assume that the top and the bottom of the gravity sources are flat and horizontal and estimate the prisms’ density contrasts. The minimization of the first-order entropy favors solutions presenting sharp borders, and the maximization of the zeroth-order entropy prevents the tendency of the source estimate to become a single prism. Thus, a judicious combination of both constraints may lead to solutions characterized by regions with virtually constant estimated density contrasts separated by sharp discontinuities. We apply our method to synthetic data from simulated intrusive bodies in sediments that present flat and horizontal tops. By comparing our results with those obtained with the smoothness constraint, we show that both methods produce good and equivalent locations of the sources’ central positions. However, the entropic regularization delineates the boundaries of the bodies with greater resolution, even in the case of 100-m-wide bodies separated by a distance as small as [Formula: see text]. Both the proposed and the global smoothness constraints are applied to real anomalies from the eastern Alps and from the Matsitama intrusive complex, northeastern Botswana. In the first case, the entropic regularization delineates two sources, with a horizontal and nearly flat top being consistent with the known geologic information. In the second case, both constraints produce virtually the same estimate, indicating, in agreement with results of synthetic tests, that the tops of the sources are neither flat nor horizontal.
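The two entropy measures described in this abstract can be illustrated with a small sketch. This is not the authors' implementation; the discrete formulas below (Shannon entropy of the normalized parameter values for zeroth order, and of the normalized absolute differences between adjacent prisms for first order) are one plausible reading of the abstract's description, and the function names are ours:

```python
import numpy as np

def shannon_entropy(v, eps=1e-12):
    # Normalize nonnegative values into a discrete distribution,
    # then compute its Shannon entropy.
    p = (v + eps) / np.sum(v + eps)
    return -np.sum(p * np.log(p))

def zeroth_order_entropy(m):
    # Entropy of the parameter values themselves; maximizing it
    # spreads the estimate over many prisms instead of letting it
    # collapse into a single prism.
    return shannon_entropy(np.abs(m))

def first_order_entropy(m):
    # Entropy of absolute differences between adjacent prisms;
    # minimizing it favors piecewise-constant estimates with a few
    # sharp discontinuities.
    return shannon_entropy(np.abs(np.diff(m)))

# A blocky profile concentrates its differences at two jumps,
# so its first-order entropy is lower than a smooth ramp's.
blocky = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
smooth = np.array([0.0, 0.25, 0.5, 0.75, 0.5, 0.25, 0.0])
print(first_order_entropy(blocky) < first_order_entropy(smooth))  # True
```

The comparison shows why the combination works: minimizing the first-order entropy pushes toward the blocky profile, while maximizing the zeroth-order entropy penalizes putting all density contrast into one prism.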


Geophysics ◽  
2007 ◽  
Vol 72 (2) ◽  
pp. S123-S132 ◽  
Author(s):  
Alison E. Malcolm ◽  
Maarten V. de Hoop ◽  
Henri Calandra

First-order internal multiples are a source of coherent noise in seismic images because they do not satisfy the single-scattering assumption fundamental to most seismic processing. There are a number of techniques to estimate internal multiples in data; in many cases, these algorithms leave some residual multiple energy in the data. This energy produces artifacts in the image, and the location of these artifacts is unknown because the multiples were estimated in the data before the image was formed. To avoid this problem, we propose a method by which the artifacts caused by internal multiples are estimated directly in the image. We use ideas from the generalized Bremmer series and the Lippmann-Schwinger scattering series to create a forward-scattering series to model multiples and an inverse-scattering series to describe the impact these multiples have on the common-image gather and the image. We present an algorithm that implements the third term of this series, responsible for the formation of first-order internal multiples. The algorithm works as part of a wave-equation migration; the multiple estimation is made at each depth using a technique related to one used to estimate surface-related multiples. This method requires knowledge of the velocity model to the depth of the shallowest reflector involved in the generation of the multiple of interest. This information allows us to estimate internal multiples without assumptions inherent to other methods. In particular, we account for the formation of caustics. Results of the techniques on synthetic data illustrate the kinematic accuracy of predicted multiples, and results on field data illustrate the potential of estimating artifacts caused by internal multiples in the image rather than in the data.


Author(s):  
Amarjot Singh Bhullar ◽  
Gospel Ezekiel Stewart ◽  
Robert W. Zimmerman

Abstract Most analyses of fluid flow in porous media are conducted under the assumption that the permeability is constant. In some “stress-sensitive” rock formations, however, the variation of permeability with pore fluid pressure is sufficiently large that it needs to be accounted for in the analysis. Accounting for the variation of permeability with pore pressure renders the pressure diffusion equation nonlinear and not amenable to exact analytical solutions. In this paper, the regular perturbation approach is used to develop an approximate solution to the problem of flow to a linear constant-pressure boundary, in a formation whose permeability varies exponentially with pore pressure. The perturbation parameter αD is defined to be the natural logarithm of the ratio of the initial permeability to the permeability at the outflow boundary. The zeroth-order and first-order perturbation solutions are computed, from which the flux at the outflow boundary is found. An effective permeability is then determined such that, when inserted into the analytical solution for the mathematically linear problem, it yields a flux that is exact to at least first order in αD. When compared to numerical solutions of the problem, the result has 5% accuracy out to values of αD of about 2, a much larger range of accuracy than is usually achieved in similar problems. Finally, an explanation is given of why the change of variables proposed by Kikani and Pedrosa, which leads to highly accurate zeroth-order perturbation solutions in radial flow problems, does not yield an accurate result for one-dimensional flow.

Article Highlights
- Approximate solution for flow to a constant-pressure boundary in a porous medium whose permeability varies exponentially with pressure.
- The predicted flowrate is accurate to within 5% for a wide range of permeability variations.
- If permeability at boundary is 30% less than initial permeability, flowrate will be 10% less than predicted by constant-permeability model.
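The definitions in this abstract are easy to make concrete. The sketch below is ours, not the paper's code; the exponential form of k(p) follows the abstract's description, but the modulus name gamma and the function names are assumptions for illustration:

```python
import math

# Permeability assumed to vary exponentially with pore pressure p:
#   k(p) = k_i * exp(-gamma * (p_i - p))
# where k_i is the initial permeability at initial pressure p_i and
# gamma is a (hypothetical) pressure-sensitivity modulus.
def permeability(k_i, gamma, p_i, p):
    return k_i * math.exp(-gamma * (p_i - p))

# Perturbation parameter: natural logarithm of the ratio of the
# initial permeability to the permeability at the outflow boundary.
def alpha_D(k_initial, k_boundary):
    return math.log(k_initial / k_boundary)

# Highlighted case: boundary permeability 30% below the initial value.
a = alpha_D(1.0, 0.7)
print(round(a, 3))  # ≈ 0.357, well inside the alpha_D < 2 accuracy range
```

For the highlighted 30% permeability reduction, αD ≈ 0.36, far below the αD ≈ 2 limit out to which the perturbation result stays within 5% of the numerical solution.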


2002 ◽  
Vol 29 (2) ◽  
pp. 161-182 ◽  
Author(s):  
Lening Zhang ◽  
John W. Welte ◽  
William F. Wieczorek

The Buffalo Longitudinal Study of Young Men was used to address the possibility of a common factor underlying adolescent problem behaviors. First, a measurement model with a single first-order factor was compared to a model with three separate correlated first-order factors. The three-factor model was better supported, motivating a second-order factor analysis, which in turn supported a single common factor. Second, a substantive model was estimated in each of two waves with psychopathic state as the common factor predicting drinking, drug use, and delinquency. Psychopathic state was stable across waves. The theory that a single latent variable accounts for the large covariance among adolescent problem behaviors was supported.


1998 ◽  
Vol 13 (39) ◽  
pp. 3169-3177 ◽  
Author(s):  
IOANNIS GIANNAKIS ◽  
K. KLEIDIS ◽  
A. KUIROUKIDIS ◽  
D. PAPADOPOULOS

We study string propagation in an anisotropic cosmological background. We solve the equations of motion and the constraints by performing a perturbative expansion of the string coordinates in powers of c², where c is the worldsheet speed of light. To zeroth order, the string is approximated by a tensionless string (since c is proportional to the string tension T). We obtain exact, analytical expressions for the zeroth- and first-order solutions and discuss some cosmological implications.
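Schematically, the expansion described above takes the following form (the notation is assumed for illustration; the abstract does not fix conventions):

```latex
X^{\mu}(\tau,\sigma) \;=\; A^{\mu}(\tau,\sigma) \;+\; c^{2}\, B^{\mu}(\tau,\sigma) \;+\; O(c^{4}),
```

where A^μ solves the tensionless (null-string) zeroth-order equations and c²B^μ is the first-order correction reintroducing the tension.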


Author(s):  
J. Pegna ◽  
F.-E. Wolter

Abstract Computer Aided Geometric Design of surfaces sometimes presents problems that were not envisioned by mathematicians in differential geometry. This paper presents mathematical results that pertain to the design of second-order smooth blending surfaces. Second-order smoothness normally requires that normal curvatures agree along all tangent directions at all points of the common boundary of two patches, called the linkage curve. The Linkage Curve Theorem proved here shows that, for a blend that is already first-order smooth to be second-order smooth, it is sufficient that normal curvatures agree in one direction other than the tangent to a first-order continuous linkage curve. This result is significant, for it substantiates earlier work in computer aided geometric design. It also offers a simple practical means of generating second-order blends, since it reduces the dimensionality of the problem to that of curve fairing and is well adapted to a formulation of the blend surface using sweeps. From a theoretical viewpoint, it is remarkable that one can generate second-order smooth blends under the assumption that the linkage curve is only first-order smooth. This property may be helpful to the designer, since linkage curves can be constructed from low-order piecewise continuous curves.


1983 ◽  
Vol 27 (01) ◽  
pp. 13-33
Author(s):  
Francis Noblesse

A new slender-ship theory of wave resistance is presented. Specifically, a sequence of explicit slender-ship wave-resistance approximations is obtained. These approximations are associated with successive approximations in a slender-ship iterative procedure for solving a new (nonlinear integro-differential) equation for the velocity potential of the flow caused by the ship. The zeroth-, first-, and second-order slender-ship approximations are given explicitly and examined in some detail. The zeroth-order slender-ship wave-resistance approximation, r^(0), is obtained by simply taking the (disturbance) potential ϕ as the trivial zeroth-order slender-ship approximation ϕ^(0) = 0 in the expression for the Kochin free-wave amplitude function; the classical wave-resistance formulas of Michell [1] and Hogner [2] correspond to particular cases of this simple approximation. The low-speed wave-resistance formulas proposed by Guevel [3], Baba [4], Maruo [5], and Kayo [6] are essentially equivalent (for most practical purposes) to the first-order slender-ship low-Froude-number approximation, r_lF^(1), which is a particular case of the first-order slender-ship approximation r^(1): specifically, the first-order slender-ship wave-resistance approximation r^(1) is obtained by approximating the potential ϕ in the expression for the Kochin function by the first-order slender-ship potential ϕ^(1), whereas the low-Froude-number approximation r_lF^(1) is associated with the zero-Froude-number limit ϕ_0^(1) of the potential ϕ^(1). A major difference between the first-order slender-ship potential ϕ^(1) and its zero-Froude-number limit ϕ_0^(1) resides in the waves that are included in the potential ϕ^(1) but are ignored in the zero-Froude-number potential ϕ_0^(1). Results of calculations by C. Y. Chen for the Wigley hull show that the waves in the potential ϕ^(1) have a remarkable effect upon the wave resistance, in particular causing a large phase shift of the wave-resistance curve toward higher values of the Froude number. As a result, the first-order slender-ship wave-resistance approximation is in significantly better agreement with experimental data than the low-Froude-number approximation r_lF^(1) and the approximations r^(0) and r_M.


Author(s):  
Thomas König ◽  
Daniel Finke

This chapter examines the transformation of the Convention's proposal on the Treaty Establishing a Constitution for Europe into the Lisbon Treaty in the aftermath of the two negative referendums from a principal-agent perspective. It shows that the common view of unitary member states, in which principals and agents share interests in the revision of treaties, can only partially, if not wrongly, explain the Treaty of Lisbon. The principal-agent analysis reveals that the political leaders delegated power to negotiating agents who worked out compromise solutions by partially revising the initial interests of their first-order principals, the political leaders. Governmental agents from smaller countries were able to focus the negotiations on a few central reform issues, such as the number of Commissioners and the voting rules of the Council, and they also successfully influenced the final outcome of these issues. A major reason for their success was their credibility, which they could increase by pointing to integration-skeptic voters, particularly in countries that had announced a referendum. Hence, governmental agents increased their bargaining efficiency by referring to voters as their second-order principals.


2020 ◽  
Vol 34 (07) ◽  
pp. 11580-11587
Author(s):  
Haojie Liu ◽  
Han Shen ◽  
Lichao Huang ◽  
Ming Lu ◽  
Tong Chen ◽  
...  

Traditional video compression technologies have been developed over decades in pursuit of higher coding efficiency. Efficient temporal information representation plays a key role in video coding. Thus, in this paper, we propose to exploit the temporal correlation using both first-order optical flow and second-order flow prediction. We suggest a one-stage learning approach that encapsulates flow as quantized features from consecutive frames, which are then entropy-coded with adaptive contexts conditioned on joint spatial-temporal priors to exploit second-order correlations. Joint priors are embedded in autoregressive spatial neighbors, co-located hyper elements, and temporal neighbors using a ConvLSTM recurrently. We evaluate our approach in the low-delay scenario against High-Efficiency Video Coding (H.265/HEVC), H.264/AVC, and another learned video compression method, following the common test settings. Our work offers state-of-the-art performance, with consistent gains across all popular test sequences.

