Uncertainty Space Expansion: A Consistent Integration of Measurement Errors in Linear Inversion

SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3317-3331
Author(s):  
Pipat Likanapaisal ◽  
Hamdi A. Tchelepi

Summary. In general, a probabilistic framework for a modeling process involves two uncertainty spaces: model parameters and state variables (or predictions). The two uncertainty spaces in reservoir simulation are connected by the governing equations of flow and transport in porous media in the form of a reservoir simulator. In a forward problem (or a predictive run), the reservoir simulator directly maps the uncertainty space of the model parameters to the uncertainty space of the state variables. Conversely, an inverse problem (or history matching) aims to improve the descriptions of the model parameters by using the measurements of state variables. However, we cannot solve the inverse problem directly in practice. Numerous algorithms, including Kriging-based inversion and the ensemble Kalman filter (EnKF) and its many variants, simplify the system by using a linear assumption. The purpose of this paper is to improve the integration of measurement errors in the history-matching algorithms that rely on the linear assumption. The statistical moment equation (SME) approach with the Kriging-based inversion algorithm is used to illustrate several practical examples. In the Motivation section, an example of pressure conditioning has a measurement that contains no additional information because of its significant measurement error. This example highlights the inadequacy of the current method, which underestimates the conditional uncertainty for both model parameters and predictions. Accordingly, we derive a new formula that recognizes the absence of additional information and preserves the unconditional uncertainty. We believe this is the consistent way to integrate measurement errors. Other examples are used to validate the new formula with both linear and nonlinear (i.e., the saturation equation) problems, with single and multiple measurements, and with different configurations of measurement errors. For broader applications, we also develop an equivalent formula for algorithms in the Monte Carlo simulation (MCS) approach, such as EnKF and ensemble smoother (ES).
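For context, the linear-Gaussian conditioning relations that Kriging-based inversion and Kalman-type updates build on can be written (in our notation, not the paper's; the paper's new formula for handling the measurement-error covariance is not reproduced here) as

\[
\hat{m} = m_0 + C_{md}\left(C_{dd} + C_{\epsilon}\right)^{-1}\left(d_{\mathrm{obs}} - d_0\right),
\qquad
C_{mm\mid d} = C_{mm} - C_{md}\left(C_{dd} + C_{\epsilon}\right)^{-1} C_{dm},
\]

where \(C_{md}\) is the prior cross-covariance between model parameters and simulated data, \(C_{dd}\) the covariance of the simulated data, and \(C_{\epsilon}\) the measurement-error covariance. How \(C_{\epsilon}\) enters these expressions is exactly what the paper revisits for the case of a measurement so noisy that it carries no additional information.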

Author(s):  
Geir Evensen

Abstract. It is common to formulate the history-matching problem using Bayes’ theorem. From Bayes’ theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters, multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model by the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes’ theorem that considers the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better cover the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the update’s magnitude. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560, 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a “larger” ensemble of perturbations to represent the measurement error covariance matrix.
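As a reminder, the standard formulation that the abstract starts from reads (our notation)

\[
f(m \mid d) \;\propto\; f(d \mid m)\, f(m),
\qquad
f(d \mid m) \;\propto\; \exp\!\left(-\tfrac{1}{2}\,\bigl(d - g(m)\bigr)^{\mathsf T} C_{d}^{-1}\bigl(d - g(m)\bigr)\right),
\]

with \(g(m)\) the forward reservoir model and \(C_d\) the measurement-error covariance. The paper's observation is that when the observed rates are also used to force the simulation during the historical period, \(g\) itself depends on \(d\), which this standard form ignores; the revised formulation therefore updates the rate-data errors alongside the model parameters.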


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
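A minimal sketch of how the half-space (Tukey) depth of a parameter vector can be approximated by Monte Carlo over projection directions; exact computation is combinatorial in higher dimensions. The arrays below are random stand-ins, not HBV parameter sets, and the selection rule at the end is purely illustrative.

```python
import numpy as np

def tukey_depth_mc(point, cloud, n_dirs=2000, seed=1):
    """Approximate half-space (Tukey) depth of `point` with respect to
    the point set `cloud` by sampling random projection directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # For each direction, the fraction of cloud points lying on the positive
    # side of the hyperplane through `point`; the depth is the worst case.
    frac = ((cloud - point) @ dirs.T >= 0).mean(axis=0)
    return frac.min()

# Illustrative use: rank candidate parameter vectors by their depth with
# respect to the best-performing set and keep the deepest (most central) ones.
rng = np.random.default_rng(0)
best_set = rng.normal(size=(200, 5))      # stand-in for good parameter vectors
candidates = rng.normal(size=(50, 5))
depths = np.array([tukey_depth_mc(c, best_set) for c in candidates])
robust = candidates[depths >= np.quantile(depths, 0.8)]
```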


SPE Journal ◽  
2009 ◽  
Vol 15 (02) ◽  
pp. 509-525 ◽  
Author(s):  
Yudou Wang ◽  
Gaoming Li ◽  
Albert C. Reynolds

Summary. With the ensemble Kalman filter (EnKF) or smoother (EnKS), it is easy to adjust a wide variety of model parameters by assimilation of dynamic data. We focus first on the case where realizations and estimates of the depths of the initial fluid contacts, as well as gridblock rock-property fields, are generated by matching production data with the EnKS. Then we add the parameters defining power law relative permeability curves to the set of parameters estimated by assimilating production data with EnKS. The efficiency of EnKF and EnKS arises because data are assimilated sequentially in time and so "history matching data" requires only one forward run of the reservoir simulator for each ensemble member. For EnKS and EnKF to yield reliable characterizations of the uncertainty in model parameters and future performance predictions, the updated reservoir-simulation variables (e.g., saturations and pressures) must be statistically consistent with the realizations of these variables that would be obtained by rerunning the simulator from time zero using the updated model parameters. This statistical consistency can be established only under assumptions of Gaussianity and linearity that do not normally hold. Here, we use iterative EnKS methods that are statistically consistent, and show that, for the problems considered here, iteration significantly improves the performance of EnKS.
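For readers unfamiliar with the mechanics, a minimal numpy sketch of the standard stochastic EnKF/EnKS analysis step is given below (our notation and simplifications); the iterative, statistically consistent variants discussed in the paper wrap a step like this in an outer iteration, which is not shown.

```python
import numpy as np

def enkf_analysis(X, d_obs, obs_op, R, seed=0):
    """One stochastic EnKF/EnKS analysis step.
    X      : (n_state, n_ens) forecast ensemble (states and/or parameters)
    d_obs  : (n_obs,) observed data
    obs_op : maps one ensemble column to its simulated data vector
    R      : (n_obs, n_obs) measurement-error covariance
    """
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    D = np.column_stack([obs_op(X[:, j]) for j in range(n_ens)])   # predicted data
    A = X - X.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    C_xd = A @ Y.T / (n_ens - 1)            # state-data cross-covariance
    C_dd = Y @ Y.T / (n_ens - 1)            # predicted-data covariance
    K = C_xd @ np.linalg.inv(C_dd + R)      # Kalman gain
    # Perturb the observations so the updated ensemble keeps the right spread.
    d_pert = d_obs[:, None] + rng.multivariate_normal(np.zeros(d_obs.size), R, n_ens).T
    return X + K @ (d_pert - D)
```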


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. E209-E223
Author(s):  
Juan Luis Fernández-Martínez ◽  
Zulima Fernández-Muñiz ◽  
Shan Xu ◽  
Ana Cernea ◽  
Colette Sirieix ◽  
...  

We have evaluated the uncertainty analysis of the 3D electrical tomography inverse problem using model reduction via singular-value decomposition and performed sampling of the nonlinear equivalence region via an explorative member of the particle swarm optimization (PSO) family. The procedure begins with the local inversion of the observed data to find a good resistivity model located in the nonlinear equivalence region. Then, the dimensionality is reduced via the spectral decomposition of the 3D geophysical model. Finally, the exploration of the uncertainty space is performed via an exploratory version of PSO (RR-PSO). This sampling methodology does not prejudge where the initial model comes from as long as this model has a geologic meaning. The 3D subsurface conductivity distribution is arranged as a 2D matrix by ordering the conductivity values contained in a given earth section as a column array and stacking parallel sections as columns of the matrix. There are three basic modes of ordering: mode 1 and mode 2, by using vertical sections in two perpendicular directions, and mode 3, by using horizontal sections. The spectral decomposition is then performed using these three 2D modes. Using this approach, it is possible to sample the uncertainty space of the 3D electrical resistivity inverse problem very efficiently. This methodology is intrinsically parallelizable and could be run for different initial models simultaneously. We show the application to a synthetic data set that is well-known in the literature on this subject, obtaining a set of surviving geophysical models located in the nonlinear equivalence region that can be used to approximate numerically the posterior distribution of the geophysical model parameters (frequentist approach). Based on these models, it is possible to perform the probabilistic segmentation of the inverse solution while answering geophysical questions with a corresponding uncertainty assessment. This methodology has a general character and could be applied to other 3D nonlinear inverse problems by implementing their corresponding forward model.
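A short numpy sketch of our reading of the matrix arrangement and spectral (SVD) reduction described above; grid sizes, the retained rank, and the array names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
cond3d = rng.lognormal(size=(20, 15, 10))    # hypothetical 3D conductivity grid (nx, ny, nz)

# Mode 3: each horizontal section (fixed depth) is flattened into a column,
# and parallel sections are stacked as the columns of a 2D matrix.
M = np.column_stack([cond3d[:, :, k].ravel() for k in range(cond3d.shape[2])])

# Spectral decomposition and truncation to a reduced-rank representation.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 4                                        # retained singular values (illustrative)
M_reduced = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The PSO search then explores the low-dimensional coefficient space of the
# retained singular vectors rather than the full 3D grid.
```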


SPE Journal ◽  
2010 ◽  
Vol 16 (02) ◽  
pp. 331-342 ◽  
Author(s):  
Hemant A. Phale ◽  
Dean S. Oliver

Summary. When the ensemble Kalman filter (EnKF) is used for history matching, the resulting updates to reservoir properties sometimes exceed physical bounds, especially when the problem is highly nonlinear. Problems of this type are often encountered during history matching of compositional models using the EnKF. In this paper, we illustrate the problem using an example in which the updated molar density of CO2 in some regions is observed to take negative values while molar densities of the remaining components are increased. Standard truncation schemes avoid negative values of molar densities but do not address the problem of increased molar densities of other components. The results can include a spurious increase in reservoir pressure with a subsequent inability to maintain injection. In this paper, we present a method for constrained EnKF (CEnKF), which takes into account the physical constraints on the plausible values of state variables during data assimilation. In the proposed method, inequality constraints are converted to a small number of equality constraints, which are used as virtual observations for calibrating the model parameters within plausible ranges. The CEnKF method is tested on a 2D compositional model and on a highly heterogeneous three-phase-flow reservoir model. The effect of the constraints on mass conservation is illustrated using a 1D Buckley-Leverett flow example. Results show that the CEnKF technique is able to enforce the nonnegativity constraints on molar densities and the bound constraints on saturations (all phase saturations must be between 0 and 1) and achieve a better estimation of reservoir properties than is obtained using only truncation with the EnKF.
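A schematic, heavily simplified rendering of the virtual-observation idea (not the paper's CEnKF algorithm): instead of truncating a variable that violates its bound after the regular update, the bound is assimilated as an equality constraint with a small error. The variable names and the one-at-a-time treatment are our simplifications.

```python
import numpy as np

def assimilate_bound_constraints(X, lower=0.0, virt_err=1e-6):
    """Schematic single-bound version: for each state variable whose ensemble
    mean falls below `lower`, treat the bound as a virtual observation and
    apply a Kalman-type update (constraints handled one at a time, covariance
    not refreshed in between, purely for illustration)."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    C = A @ A.T / (n_ens - 1)                         # ensemble covariance
    for i in np.where(X.mean(axis=1) < lower)[0]:     # violated variables
        H = np.zeros((1, X.shape[0])); H[0, i] = 1.0
        K = C @ H.T @ np.linalg.inv(H @ C @ H.T + virt_err)
        X = X + K @ (lower - H @ X)                   # pull members toward the bound
    return X
```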


2008 ◽  
Vol 12 (6) ◽  
pp. 1273-1283 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.


Geophysics ◽  
1995 ◽  
Vol 60 (3) ◽  
pp. 899-911 ◽  
Author(s):  
Gregory Newman

The crosswell electromagnetic (EM) inverse problem is solved with an integral-equation (IE) formulation using successive Born approximations in the frequency domain. Because the inverse problem is nonlinear, the predicted fields and Green’s functions are continually updated. Updating the fields and Green’s functions relates small changes in the predicted data to small changes in the model parameters through Fréchet kernels. These fields and Green’s functions are calculated with an efficient 3-D finite-difference solver. Since the resistivity is invariant along strike, the 3-D fields are integrated along strike so the 2-D kernels can be assembled. At the early stages of the inversion, smoothing of the electrical conductivity stabilizes the inverse solution when it is far from convergence. As the solution converges, this smoothing is relaxed and more effort is made to reduce the data misfit. Bounds on the conductivity are included in the solution to eliminate unrealistic estimates. The robustness of the inversion scheme has been demonstrated with synthetic and field data that are underdetermined from the standpoint of the smooth models being sought. Two synthetic examples with added Gaussian noise were considered, including data arising from an IE solver. This IE solver is different from the one embedded in the inversion algorithm and has provided a stronger check on the scheme. The synthetic examples show it is more difficult to reconstruct a target’s conductivity than its geometry at a single frequency. The inversion scheme has been successfully tested using data collected at the Richmond field site near Berkeley, California, where it has imaged a salt water plume injected into the interwell region. The data in this experiment consisted of two sets of measurements, taken before and after the injection of 50 000 gallons of 1 Ωm salt water. Findings show that underdetermined inversion using small amounts of field data can be sufficient to produce useful, but smoothed, maps of the conductivity. The data in this instance need only be single frequency and single component.
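In generic form (our notation, not the paper's), each linearized step of such a scheme solves a damped, bounded least-squares problem

\[
\min_{\mathbf{m}} \;\bigl\lVert \mathbf{W}_d\bigl(\mathbf{d}_{\mathrm{obs}}-\mathbf{d}(\mathbf{m}_k)-\mathbf{J}_k(\mathbf{m}-\mathbf{m}_k)\bigr)\bigr\rVert^{2}
\;+\;\lambda_k\,\bigl\lVert \mathbf{W}_m\,\mathbf{m}\bigr\rVert^{2},
\qquad
\mathbf{m}_{\mathrm{lo}}\le \mathbf{m}\le \mathbf{m}_{\mathrm{hi}},
\]

where \(\mathbf{J}_k\) collects the Fréchet kernels at the current model, \(\mathbf{W}_m\) is a smoothing operator, and the trade-off parameter \(\lambda_k\) is reduced as the solution converges, which corresponds to the relaxed smoothing described above.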


Author(s):  
Marcello Pericoli ◽  
Marco Taboga

Abstract. We propose a general method for the Bayesian estimation of a very broad class of non-linear no-arbitrage term-structure models. The main innovation we introduce is a computationally efficient method, based on deep learning techniques, for approximating no-arbitrage model-implied bond yields to any desired degree of accuracy. Once the pricing function is approximated, the posterior distribution of model parameters and unobservable state variables can be estimated by standard Markov chain Monte Carlo methods. As an illustrative example, we apply the proposed techniques to the estimation of a shadow-rate model with a time-varying lower bound and unspanned macroeconomic factors.
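A toy sketch of the surrogate-plus-MCMC workflow the abstract describes, with a made-up two-parameter "pricing" function standing in for the expensive no-arbitrage model; the network size, priors, and step sizes are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in for the expensive pricing function: 2 parameters -> 5 yields.
def true_yields(theta):
    a, b = theta
    tau = np.array([1.0, 2.0, 5.0, 10.0, 30.0])
    return a + b * (1 - np.exp(-tau / 5)) / (tau / 5)

rng = np.random.default_rng(0)

# 1) Train a neural-network surrogate on simulated (parameters -> yields) pairs.
thetas = rng.uniform(-1, 1, size=(5000, 2))
yields = np.array([true_yields(t) for t in thetas])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(thetas, yields)

# 2) Run random-walk Metropolis on the parameters using the cheap surrogate.
obs = true_yields(np.array([0.3, -0.5])) + rng.normal(0, 0.01, 5)

def log_post(theta):
    if np.any(np.abs(theta) > 1):                       # flat prior on [-1, 1]^2
        return -np.inf
    resid = obs - surrogate.predict(theta[None, :])[0]  # surrogate replaces the pricer
    return -0.5 * np.sum(resid**2) / 0.01**2

theta, lp, chain = np.zeros(2), log_post(np.zeros(2)), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

print(np.mean(chain[5000:], axis=0))                    # posterior mean after burn-in
```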


2017 ◽  
Vol 65 (4) ◽  
pp. 479-488 ◽  
Author(s):  
A. Boboń ◽  
A. Nocoń ◽  
S. Paszek ◽  
P. Pruski

Abstract. The paper presents a method for determining electromagnetic parameters of different synchronous generator models based on dynamic waveforms measured at power rejection. Such a test can be performed safely under normal operating conditions of a generator working in a power plant. A generator model was investigated, expressed by reactances and time constants of the steady, transient, and subtransient state in the d and q axes, as well as circuit models (types (3,3) and (2,2)) expressed by resistances and inductances of the stator, excitation, and equivalent rotor damping circuit windings. All these models approximately take into account the influence of magnetic core saturation. The least squares method was used for parameter estimation. The objective function, defined as the mean square error between the measured waveforms and the waveforms calculated from the mathematical models, was minimized. A method of determining the initial values of those state variables which also depend on the searched parameters is presented. To minimize the objective function, a gradient optimization algorithm finding local minima for a selected starting point was used. To get closer to the global minimum, calculations were repeated many times, taking into account the inequality constraints for the searched parameters. The paper presents the parameter estimation results and a comparison of the waveforms measured and calculated based on the final parameters for 200 MW and 50 MW turbogenerators.
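As a minimal illustration of the estimation machinery described above, the sketch below fits a two-time-constant decay (a rough analogue of transient and subtransient components) to a noisy waveform with scipy's bounded least-squares solver; the waveform, starting point, and bounds are hypothetical, not generator data.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    """Two-exponential decay: amplitudes a1, a2 and time constants t1, t2."""
    a1, t1, a2, t2 = p
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 400)
measured = model([0.7, 1.2, 0.3, 0.08], t) + rng.normal(0, 0.01, t.size)   # synthetic "measurement"

def residuals(p):
    # least_squares minimizes the sum of squared residuals,
    # i.e. the mean square error up to a constant factor.
    return model(p, t) - measured

fit = least_squares(
    residuals,
    x0=[0.5, 1.0, 0.5, 0.1],                    # selected starting point for the local search
    bounds=([0, 0.1, 0, 0.01], [2, 10, 2, 1]),  # inequality constraints on the parameters
)
print(fit.x)
```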

