Nonlinear inversion, statistical mechanics, and residual statics estimation

Geophysics ◽  
1985 ◽  
Vol 50 (12) ◽  
pp. 2784-2796 ◽  
Author(s):  
Daniel H. Rothman

Nonlinear inverse problems are usually solved with linearized techniques that depend strongly on the accuracy of initial estimates of the model parameters. With linearization, objective functions can be minimized efficiently, but the risk of local rather than global optimization can be severe. I address the problem confronted in nonlinear inversion when no good initial guess of the model parameters can be made. The fully nonlinear approach presented is rooted in statistical mechanics. Although a large nonlinear problem might appear computationally intractable without linearization, reformulation of the same problem into smaller, interdependent parts can lead to tractable computation while preserving nonlinearities. I formulate inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics. Solutions are then obtained by maximizing the posterior probability of the model parameters. Optimization is performed with a Monte Carlo technique that was originally introduced to simulate the statistical mechanics of systems in equilibrium. The technique is applied to residual statics estimation when statics are unusually large and data are contaminated by noise. Poorly picked correlations (“cycle skips” or “leg jumps”) appear as local minima of the objective function, but global optimization is successfully performed. Further applications to deconvolution and velocity estimation are proposed.
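The Metropolis-style optimization the abstract describes can be illustrated with a minimal simulated-annealing sketch (a toy one-dimensional objective stands in for the residual-statics cross-correlation objective; the function names and geometric cooling schedule are illustrative assumptions, not Rothman's formulation):

```python
import math
import random

def metropolis_minimize(objective, x0, steps=20000, step_size=0.5,
                        t_start=2.0, t_end=0.01, seed=0):
    """Minimize `objective` with a Metropolis/simulated-annealing walk.

    At each step a random perturbation is proposed; uphill moves are
    accepted with the Gibbs probability exp(-delta/T), which lets the
    walk escape local minima of the objective.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        # geometric cooling from t_start down to t_end
        t = t_start * (t_end / t_start) ** (k / steps)
        cand = x + rng.uniform(-step_size, step_size)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A multimodal toy objective: ripples on a parabola, global minimum near x = 3.13
f = lambda x: (x - 3) ** 2 + 4 * math.cos(3 * x)
x_star, f_star = metropolis_minimize(f, x0=-2.0)
```

The acceptance of uphill moves with probability exp(-Δ/T) is what distinguishes this from the greedy descent of a linearized method; as T cools, the walk settles into the deepest basin visited.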

2021 ◽  
Vol 11 (10) ◽  
pp. 4575
Author(s):  
Eduardo Fernández ◽  
Nelson Rangel-Valdez ◽  
Laura Cruz-Reyes ◽  
Claudia Gomez-Santillan

This paper addresses group multi-objective optimization from a new perspective. For each point in the feasible decision set, satisfaction or dissatisfaction of each group member is determined by a multi-criteria ordinal classification approach, based on comparing solutions with a limiting boundary between the classes "unsatisfactory" and "satisfactory". Whole-group satisfaction can then be maximized, finding solutions as close as possible to the ideal consensus. The group moderator is in charge of making the final decision, finding the best compromise between collective satisfaction and dissatisfaction. Imperfect information on the values of objective functions, on required and available resources, and on decision model parameters is handled by using interval numbers. Two different kinds of multi-criteria decision models are considered: (i) an interval outranking approach and (ii) an interval weighted-sum value function. The proposal is more general than other approaches to group multi-objective optimization since (a) some (even all) objective values may not be the same for different DMs; (b) each group member may consider their own set of objective functions and constraints; (c) objective values may be imprecise or uncertain; (d) imperfect information on resource availability and requirements can be handled; (e) each group member may have their own perception of the availability of resources and the resource requirement per activity. An important application of the new approach is collective multi-objective project portfolio optimization. This is illustrated by solving a real-size group many-objective project portfolio optimization problem using evolutionary computation tools.
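The interval arithmetic behind the weighted-sum variant can be sketched as follows (the `Interval` class and the possibility-degree comparison rule are illustrative assumptions; the paper's interval outranking model is more elaborate):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """An interval number [lo, hi] representing an imprecise value."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, w):
        """Multiply by a nonnegative weight w."""
        return Interval(w * self.lo, w * self.hi)

def weighted_sum(values, weights):
    """Interval weighted-sum value of one solution for one group member."""
    total = Interval(0.0, 0.0)
    for v, w in zip(values, weights):
        total = total + v.scale(w)
    return total

def possibility_geq(a, b):
    """Possibility degree that interval a >= interval b (a common rule
    for comparing interval numbers; 1 means a certainly dominates b)."""
    width = (a.hi - a.lo) + (b.hi - b.lo)
    if width == 0:
        return 1.0 if a.lo >= b.lo else 0.0
    return min(1.0, max(0.0, (a.hi - b.lo) / width))

# two solutions scored on two imprecise objectives with weights 0.6 / 0.4
a = weighted_sum([Interval(3, 4), Interval(1, 2)], [0.6, 0.4])
b = weighted_sum([Interval(2, 3), Interval(2, 3)], [0.6, 0.4])
```

Here `a` evaluates to roughly [2.2, 3.2] and `b` to [2.0, 3.0], so `a` is preferred only to a degree, which is exactly the kind of graded comparison that interval numbers make possible.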


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple-hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed important for the catchment. Next, we create 13 additional simplified models in which some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (the logarithmic Nash–Sutcliffe efficiency; the percentage bias, PBIAS; and the ratio of the root mean square error to the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows the construction of a more streamlined 15th model with improved performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 against HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
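The three objective functions named above are standard in hydrology and can be sketched as follows (variable names and the small epsilon guarding the logarithm are assumptions; note that the PBIAS sign convention varies between authors):

```python
import math

def log_nse(obs, sim, eps=1e-6):
    """Logarithmic Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    lo = [math.log(o + eps) for o in obs]
    ls = [math.log(s + eps) for s in sim]
    mean_lo = sum(lo) / len(lo)
    num = sum((a - b) ** 2 for a, b in zip(lo, ls))
    den = sum((a - mean_lo) ** 2 for a in lo)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percentage bias; 0 means no systematic over- or underestimation."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    mean_o = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / len(obs))
    return rmse / std_o

# a small synthetic discharge series (observed vs. simulated)
obs = [1.0, 2.0, 4.0, 3.0, 2.0]
sim = [1.1, 1.9, 3.8, 3.2, 2.1]
```

Evaluating several complementary criteria at once, as the study does, guards against a model that scores well on one aspect of the hydrograph (e.g., peaks) while failing on another (e.g., low flows, which the logarithmic NSE emphasizes).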


2018 ◽  
Vol 26 (4) ◽  
pp. 569-596 ◽  
Author(s):  
Yuping Wang ◽  
Haiyan Liu ◽  
Fei Wei ◽  
Tingting Zong ◽  
Xiaodong Li

For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy for decomposing the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., an analytical model of the objective function is assumed to be unknown), and they attempt to learn an appropriate variable grouping that allows a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas provide rich information which can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. Variables can thus be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework.
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive compared with that of the state-of-the-art LSGO algorithms.
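The core idea of FBG, that variables appearing together in a nonseparable term must end up in the same subcomponent, can be sketched with a union-find over the additive terms of a formula (a deliberate simplification: the paper classifies individual operations in the expression tree, while this sketch assumes the formula has already been split into additive terms):

```python
def group_variables(terms, n_vars):
    """Group variables into non-interacting subcomponents.

    `terms` lists, for each additive term of the objective, the
    variable indices that interact within that term.  Variables
    sharing a term are merged into one group via union-find.
    """
    parent = list(range(n_vars))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for term in terms:
        for v in term[1:]:
            union(term[0], v)

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0*x1 + sin(x1 + x2) + x3**2 + x4*x5  ->  terms {0,1},{1,2},{3},{4,5}
groups = group_variables([[0, 1], [1, 2], [3], [4, 5]], 6)
```

Here x1 links the first two terms, so {x0, x1, x2} form one interdependent subcomponent while {x3} and {x4, x5} can be optimized independently by separate subpopulations in the cooperative coevolution framework.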


2015 ◽  
Vol 58 (5) ◽  
Author(s):  
Sankar N. Bhattacharya

Sensitivity kernels or partial derivatives of phase velocity (c) and group velocity (U) with respect to medium parameters are useful for interpreting a given set of observed surface wave velocity data. In addition to phase velocities, group velocities are also being observed to find the radial anisotropy of the crust and mantle. However, the sensitivities of group velocity for a radially anisotropic Earth have rarely been studied. Here we show the sensitivities of group velocity, along with those of phase velocity, to the medium parameters V_SV, V_SH, V_PV, V_PH, η and density in a radially anisotropic spherical Earth. The peak sensitivities for U are generally twice those for c; thus U is more efficient than c for exploring the anisotropic nature of the medium. Love waves depend mainly on V_SH, while Rayleigh waves are nearly independent of V_SH. The sensitivities show that there are trade-offs among these parameters during inversion and that the number of parameters to be evaluated independently needs to be reduced. We suggest a nonlinear inversion performed jointly for Rayleigh and Love waves; in such an inversion the best solutions are obtained among the model parameters within prescribed limits for each parameter. We first choose V_SH, V_SV and V_PH within their corresponding limits; V_PV and η can then be evaluated from empirical relations among the parameters. Density has a small effect on surface wave velocities and can be taken from other studies or from an empirical relation between density and average P-wave velocity.


2013 ◽  
Vol 55 (2) ◽  
pp. 109-128 ◽  
Author(s):  
B. L. ROBERTSON ◽  
C. J. PRICE ◽  
M. REALE

Abstract. A stochastic algorithm for bound-constrained global optimization is described. The method can be applied to objective functions that are nonsmooth or even discontinuous. The algorithm forms a partition on the search region using classification and regression trees (CART), which defines a region where the objective function is relatively low. Further points are drawn directly from the low region before a new partition is formed. Alternating between partition and sampling phases provides an effective method for nonsmooth global optimization. The sequence of iterates generated by the algorithm is shown to converge to an essential global minimizer with probability one under mild conditions. Nonprobabilistic results are also given when random sampling is replaced with points taken from the Halton sequence. Numerical results are presented for both smooth and nonsmooth problems and show that the method is effective and competitive in practice.
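The Halton sequence mentioned as the deterministic replacement for random sampling can be generated in a few lines (the CART partition phase is omitted; function names are illustrative):

```python
def halton(index, base):
    """The `index`-th element (1-based) of the van der Corput
    sequence in the given base: reverse the base-b digits of
    `index` and place them after the radix point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_points(n, bases=(2, 3)):
    """First n points of the low-discrepancy Halton sequence in [0,1)^d,
    using one coprime base per dimension."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

pts = halton_points(4)  # [(1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9)]
```

Unlike pseudorandom draws, these points fill the unit box with provably low discrepancy, which is what allows the nonprobabilistic convergence results the abstract refers to.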


Energies ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 1242
Author(s):  
Jiangyi Lv ◽  
Hongwen He ◽  
Wei Liu ◽  
Yong Chen ◽  
Fengchun Sun

Accurate and reliable vehicle velocity estimation is greatly motivated by the increasing demands of high-precision motion control for autonomous vehicles and the decreasing cost of the required multi-axis IMU sensors. A practical estimation method for the longitudinal and lateral velocities of electric vehicles is proposed. Two reliable empirical driving judgements about the velocities are extracted from the signals of the ordinary onboard vehicle sensors, which correct the integral errors of the corresponding kinematic equations on a long timescale. Meanwhile, the additive biases of the measured accelerations are estimated recursively by comparing the integral of the measured accelerations with the difference of the estimated velocities between adjacent strong empirical-correction instants, which further compensates the kinematic integral error on a short timescale. The algorithm is verified by both CarSim-Simulink co-simulation and a controller-in-the-loop test under the CarMaker-RoadBox environment. The results show that the velocities can be accurately and reliably estimated under a wide range of driving conditions without prior knowledge of the tire model and other unavailable signals or frequently changing model parameters. The relative estimation error of the longitudinal velocity and the absolute estimation error of the lateral velocity are kept within 2% and 0.5 km/h, respectively.
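The recursive bias estimation described above can be sketched as reconciling the integral of measured acceleration with the velocity change implied by two successive empirical corrections (the function name, window handling, and smoothing factor `alpha` are assumptions, not the paper's exact recursion):

```python
def estimate_bias(a_meas, v_corr_start, v_corr_end, dt, bias_prev, alpha=0.3):
    """Update the accelerometer-bias estimate between two strong
    empirical-correction instants.

    The integral of the measured acceleration over the window should
    match the externally corrected velocity change; any mismatch per
    unit time is attributed to an additive bias and blended into the
    running estimate with gain `alpha`.
    """
    window = len(a_meas) * dt
    integral = sum(a_meas) * dt
    bias_obs = (integral - (v_corr_end - v_corr_start)) / window
    return (1.0 - alpha) * bias_prev + alpha * bias_obs

# 1 s of accelerometer data carrying a constant 0.2 m/s^2 additive bias;
# the empirical corrections say the true velocity change was 1.0 m/s
dt = 0.01
a_meas = [1.2] * 100
bias = estimate_bias(a_meas, v_corr_start=0.0, v_corr_end=1.0,
                     dt=dt, bias_prev=0.0)
```

Subtracting the current bias estimate from the measured acceleration before integrating is what keeps the kinematic dead-reckoning error bounded between corrections.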


Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. D75-D82
Author(s):  
Alireza Shahin ◽  
Mike Myers ◽  
Lori Hathon

Joint modeling and inversion of frequency-dependent dielectric constant and electrical resistivity well-log measurements has been addressed in the literature in recent years. However, this problem has not been studied for dual-porosity carbonate formations. Moreover, previous studies presume the salinity and matrix dielectric constant to be known. We have combined a model for the brine dielectric constant with two laboratory-supported models for the electrical resistivity and dielectric constant of dual-porosity carbonates. Using this methodology, we replicate electrical resistivity and dielectric well-log measurements. Using a stochastic global optimization algorithm, we formulate a joint inversion workflow to estimate the petrophysical properties of interest. For a constructed dual-porosity carbonate reservoir, we determine that the inversion workflow matches the forward-modeled data for the oil column, water column, and transition zone. We also find that our inversion workflow can retrieve local model parameters (water-filled intergranular porosity and water-filled vuggy porosity) and global model parameters (matrix dielectric constant, lithology exponents for intergranular and vuggy pores, and salinity) with reasonable accuracy.


Author(s):  
Aritra Chakraborty ◽  
M. C. Messner ◽  
T.-L. Sham

Abstract. Calibrating inelastic models for high-temperature materials used in advanced reactor heat exchangers is a critical aspect of accurately predicting their deformation behavior under different loading conditions, and thus of determining the corresponding failure times. The experimental data against which these models are calibrated often contain a wide degree of variability caused by heat-to-heat material property variations and general experimental uncertainty. Most often, model calibration is done against the mean of these experimental data without considering this variability. In this work we aim to capture the bounds of the viscoplastic parameter uncertainties that enclose the observed scatter in the experimental data using Bayesian Markov chain Monte Carlo (MCMC) methods. Bayesian inference provides a probabilistic framework in which parameter uncertainties can be quantified coherently from prior parameter distributions and the available data. To perform the statistical Bayesian MCMC analysis, a pre-calibrated model, fitted against the mean of the experimental data, is used to set the prior distribution and bounds, while sampling is done with the Metropolis–Hastings algorithm for four Markov chains in tandem, finally yielding the posterior distribution of the model parameters. Since different inelastic parameters are sensitive to different tests, data from multiple experimental conditions (tensile and creep) are combined to capture the bounds on all the parameters. The developed statistical model reasonably captures the scatter observed in the experimental data. Quantifying uncertainty in inelastic models will improve high-temperature engineering design practice and lead to safer, more effective component designs.
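A random-walk Metropolis–Hastings sampler of the kind the abstract applies can be sketched in a few lines (a standard-normal toy posterior replaces the viscoplastic model likelihood; the names and proposal scale are assumptions):

```python
import math
import random

def metropolis_hastings(log_post, theta0, n_samples=5000, prop_std=0.5, seed=1):
    """Random-walk Metropolis-Hastings: draw samples from a distribution
    known only up to a constant through its log density `log_post`."""
    rng = random.Random(seed)
    theta = theta0
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        cand = theta + rng.gauss(0.0, prop_std)
        lp_cand = log_post(cand)
        # accept with probability min(1, post(cand) / post(theta))
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            theta, lp = cand, lp_cand
        chain.append(theta)
    return chain

# toy posterior: standard normal, log density known up to a constant
chain = metropolis_hastings(lambda t: -0.5 * t * t, theta0=3.0)
post = chain[1000:]                 # discard burn-in
mean = sum(post) / len(post)
```

Running several such chains in tandem from dispersed starting points, as the study does, is the standard way to check convergence before trusting the posterior bounds.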


2015 ◽  
Vol 77 (28) ◽  
Author(s):  
Siti Marhainis Othman ◽  
Mohd Fua’ad Rahmat ◽  
Sahazati Md. Rozali ◽  
Sazilah Salleh

Electro-hydraulic actuator (EHA) systems inherently suffer from uncertainties, nonlinearities and time-varying model parameters, which make modeling and controller design more complicated. A Proportional-Integral-Derivative (PID) control scheme has been proposed; the main problem in its application is tuning the parameters to their optimum values. This study investigates the optimization of PID parameters using particle swarm optimization (PSO). Simulation studies were carried out in MATLAB and Simulink.
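A minimal PSO sketch shows the tuning loop (the quadratic stand-in cost with a known optimum is a placeholder; a real study would evaluate each particle's (Kp, Ki, Kd) by simulating the closed-loop EHA response, e.g. against an ITAE-type criterion):

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# stand-in cost with a known optimum at Kp=4, Ki=1.5, Kd=0.2 (hypothetical values)
cost = lambda k: (k[0] - 4) ** 2 + (k[1] - 1.5) ** 2 + (k[2] - 0.2) ** 2
gains, best = pso(cost, bounds=[(0, 10), (0, 5), (0, 1)])
```

Because PSO only requires cost evaluations, it needs no gradient of the closed-loop response, which is what makes it attractive for tuning controllers of nonlinear, time-varying plants like the EHA.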

