Integrated Work Flow of Preserving Facies Realism in History Matching: Application to the Brugge Field

SPE Journal ◽  
2016 ◽  
Vol 21 (04) ◽  
pp. 1413-1424 ◽  
Author(s):  
Yuqing Chang ◽  
Andreas S. Stordal ◽  
Randi Valestrand

Summary Data assimilation with ensemble-based inversion methods has been applied successfully for parameter estimation in reservoir models. In complex reservoir models, however, it remains challenging to estimate the model parameters and preserve geological realism simultaneously. In particular, when handling discrete model parameters such as the facies types of fluvial channels, geological realism becomes a key concern. The main objective of this work is to address these issues for a complex field with a newly extended version of a recently proposed facies-parameterization approach coupled with an ensemble-based data-assimilation method. The proposed workflow combines the new facies parameterization and the adaptive Gaussian mixture (AGM) filter into a data-assimilation framework for channelized reservoirs. To handle the discrete facies parameters, we combine probability maps and truncated Gaussian fields to obtain a continuous parameterization of the facies fields. For the data assimilation, we use the AGM filter, an efficient history-matching approach that incorporates a resampling routine allowing facies fields to be regenerated with information from the updated probability maps. The workflow is evaluated, for the first time, on a complex field case: the Brugge field. This reservoir model consists of layers with complex channelized structures and layers characterized by reservoir properties generated with variograms. With limited prior knowledge of the facies model, the workflow is shown to preserve channel continuity while reducing the reservoir-model uncertainty with AGM. When applied to a complex reservoir, the proposed workflow provides a geologically consistent and realistic reservoir model that improves the prediction of subsurface flow behavior.
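
A minimal sketch of the truncation idea behind such a parameterization (assuming numpy/scipy; the grid, prior proportion, and threshold rule are illustrative, and the Gaussian field here is uncorrelated white noise standing in for a variogram-based field):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative 2D grid: prob_map[i, j] is the (updatable) probability of
# the channel facies at each cell; z stands in for a spatially correlated
# Gaussian field (in practice generated from a variogram model).
nx, ny = 50, 50
prob_map = np.full((nx, ny), 0.3)      # prior channel proportion
z = rng.standard_normal((nx, ny))

# Truncation rule: a cell is channel facies where the Gaussian field falls
# below the quantile implied by the local probability. Both prob_map and z
# are continuous, so an ensemble method can update them freely before the
# discrete facies field is regenerated by truncation.
threshold = norm.ppf(prob_map)         # cell-wise truncation threshold
facies = (z < threshold).astype(int)   # 1 = channel, 0 = background
```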

SPE Journal ◽  
2010 ◽  
Vol 16 (02) ◽  
pp. 331-342 ◽  
Author(s):  
Hemant A. Phale ◽  
Dean S. Oliver

Summary When the ensemble Kalman filter (EnKF) is used for history matching, the resulting updates to reservoir properties sometimes exceed physical bounds, especially when the problem is highly nonlinear. Problems of this type are often encountered when history matching compositional models with the EnKF. In this paper, we illustrate the problem with an example in which the updated molar density of CO2 in some regions takes negative values while the molar densities of the remaining components are increased. Standard truncation schemes avoid negative molar densities but do not address the problem of increased molar densities of other components; the results can include a spurious increase in reservoir pressure with a subsequent inability to maintain injection. We present a constrained EnKF (CEnKF) method that takes into account the physical constraints on the plausible values of state variables during data assimilation. In the proposed method, inequality constraints are converted to a small number of equality constraints, which are used as virtual observations for calibrating the model parameters within plausible ranges. The CEnKF method is tested on a 2D compositional model and on a highly heterogeneous three-phase-flow reservoir model. The effect of the constraints on mass conservation is illustrated with a 1D Buckley-Leverett flow example. Results show that the CEnKF technique is able to enforce the nonnegativity constraints on molar densities and the bound constraints on saturations (all phase saturations must be between 0 and 1), and it achieves a better estimation of reservoir properties than is obtained using only truncation with the EnKF.
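
A minimal numpy sketch of the virtual-observation idea (hypothetical shapes and names; a simplified stand-in for the paper's CEnKF algorithm, which converts violated inequality constraints into equality constraints assimilated as virtual observations):

```python
import numpy as np

def enkf_update(X, Y, d_obs, R):
    """Standard stochastic EnKF update.
    X: (n_state, n_ens) state ensemble; Y: (n_obs, n_ens) predicted data;
    d_obs: (n_obs,) observations; R: (n_obs, n_obs) observation-error cov."""
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xa @ Ya.T / (n_ens - 1)
    C_yy = Ya @ Ya.T / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy + R)
    D = d_obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(d_obs)), R, n_ens).T      # perturbed observations
    return X + K @ (D - Y)

def constrained_update(X, Y, d_obs, R, lower, upper, eps=1e-6):
    """After the usual update, turn each violated bound into a virtual
    equality observation (state component pinned inside its bounds) and
    assimilate it with a tiny error variance eps."""
    X = enkf_update(X, Y, d_obs, R)
    viol = np.where((X < lower[:, None]).any(1) | (X > upper[:, None]).any(1))[0]
    for i in viol:
        target = np.clip(X[i].mean(), lower[i], upper[i])
        H = np.zeros((1, X.shape[0])); H[0, i] = 1.0
        X = enkf_update(X, H @ X, np.array([target]), np.array([[eps]]))
    return X
```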


2019 ◽  
Vol 23 (6) ◽  
pp. 1331-1347 ◽  
Author(s):  
Miguel Alfonzo ◽  
Dean S. Oliver

Abstract In ensemble-based history matching it is common to evaluate the adequacy of the initial ensemble of models through visual comparison between actual observations and data predictions prior to data assimilation. If the model is appropriate, the observed data should look plausible when compared with the distribution of realizations of simulated data. Data coverage alone, however, is not an effective criterion for model criticism, because coverage can often be obtained simply by increasing the variability in a single model parameter. In this paper, we propose a methodology for determining the suitability of a model before data assimilation, aimed particularly at real cases with large numbers of model parameters, large amounts of data, and correlated observation errors. The model diagnostic is based on an approximation of the Mahalanobis distance between the observations and the ensemble of predictions in high-dimensional spaces. We applied the methodology to two examples: a Gaussian example, which shows that our shrinkage estimate of the covariance matrix discriminates outliers better than the pseudo-inverse and a diagonal approximation of this matrix, and an example using data from the Norne field. In the second test, we used actual production, repeat-formation-tester, and inverted seismic data to evaluate the suitability of the initial reservoir simulation model and seismic model. Despite the good data coverage, the model diagnostic indicated that model improvement was necessary. After modification, the model was validated against the observations and is now ready for history matching to production and seismic data. This shows that the proposed methodology for evaluating model adequacy is suitable for large realistic problems.
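
A minimal sketch of such a diagnostic (using scikit-learn's Ledoit-Wolf estimator as a stand-in for the paper's shrinkage estimate; the chi-squared reference assumes Gaussianity):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import LedoitWolf

def mahalanobis_diagnostic(D_pred, d_obs, C_d):
    """D_pred: (n_ens, n_obs) predicted-data ensemble; d_obs: (n_obs,)
    observations; C_d: (n_obs, n_obs) observation-error covariance."""
    mu = D_pred.mean(axis=0)
    # Shrinkage estimate of the prediction covariance: far better
    # conditioned than the sample covariance when n_obs >> n_ens.
    C_pred = LedoitWolf().fit(D_pred).covariance_
    C = C_pred + C_d                    # total data-mismatch covariance
    r = d_obs - mu
    d2 = r @ np.linalg.solve(C, r)      # squared Mahalanobis distance
    # Under a Gaussian model d2 ~ chi2(n_obs); a tiny p-value flags
    # observations that are implausible under the prior ensemble.
    p_value = chi2.sf(d2, df=len(d_obs))
    return d2, p_value
```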


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model, and applies the method to a real channel reservoir off the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs with this method. A geological description of the reservoir case study is then provided, along with the procedure to build 3D reservoir models conditioned only to the static data. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed so that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. To check predictive power, the matched models are then run for the last 1½ years, and the results compare favorably with the field data.

Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic, and dynamic (production-rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built from geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come as box-shaped or pipe-shaped geometries centered around or between wells, devoid of any geological considerations, because the primary focus lies in obtaining a history match.
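
A minimal sketch of the central PPM step (the per-cell draw below is a hypothetical stand-in for a multiple-point geostatistical simulation conditioned to the perturbed probabilities):

```python
import numpy as np

def ppm_perturb(current_facies, prior_prob, r, rng):
    """One probability-perturbation step.
    current_facies: (n_cells,) 0/1 indicator of the current realization;
    prior_prob: (n_cells,) prior channel probability (e.g., from seismic);
    r in [0, 1]: r=0 keeps the current model, r=1 draws a fresh prior model."""
    # Perturb probabilities, not petrophysical properties, so every
    # candidate model stays consistent with the geologic concept.
    p = (1.0 - r) * current_facies + r * prior_prob
    # Stand-in for conditioning a multiple-point geostatistical simulation
    # to the perturbed probability field (independent draws shown here).
    return (rng.random(p.shape) < p).astype(int)

# Outer loop (not shown): tune the scalar r by a 1D search so that the
# simulated oil, water, and gas rates better match the field history,
# then iterate with the improved model as the new current realization.
```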


2021 ◽  
Author(s):  
Mohammed Abd-Allah ◽  
Ahmed Abdelrahman ◽  
Luke Van Den Brul ◽  
Taha Taha ◽  
Mohammad Ali Javed

Abstract Economic evaluation of exploration and production projects ensures a positive return for asset operators and stakeholders and evaluates the risk in field-development decisions arising from both reservoir-model uncertainties and fluctuations in oil and gas prices. Traditionally, such evaluation is performed manually and deterministically using a single case or a limited number of cases (a limited number of reservoir models and a few values of the economic parameters). This traditional approach does not integrate seismic-to-simulation reservoir-model uncertainties; the reservoir model used is often unreliable because of inconsistent property modifications during history matching; the full span of prediction uncertainty is not properly propagated to the economic evaluation; and the whole process is not fully automated. This paper presents an integrated and automated forward-modelling approach in which static and dynamic models are connected to integrate the impact of uncertainties at the different modelling stages, from seismic interpretation through geological modelling to dynamic simulation and on to economic evaluation. The approach is demonstrated using synthetic 3D model data mimicking a real North Sea field. It starts by building an integrated modelling workflow that captures the various reservoir-model uncertainties at different stages to automatically generate multiple probable model realisations. Proxy models are constructed and used to refine the history match in successive batches. For each development scenario, prediction probabilities are estimated using a posterior ensemble of geologically consistent runs that match the historical observed data. The ensemble of reservoir models is then automatically evaluated against different possible economic scenarios. The approach presents a seamless workflow that benefits from new-generation hardware and software, enables faster simultaneous realisations, and produces consistent and more reliable reservoir models. A probabilistic economic-evaluation concept is implemented to calculate the statistical probabilities of the economic indicators.
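
A minimal sketch of the probabilistic economic step (all rates, prices, and cost figures below are invented placeholders): every history-matched realisation is evaluated against every price scenario, and the economic indicator is reported as percentiles:

```python
import numpy as np

def npv(oil_rate, oil_price, opex, capex, discount=0.10, dt_years=1.0):
    """Net present value: oil_rate is an (n_steps,) annual profile;
    oil_price, opex, capex are scalars for one economic scenario."""
    t = np.arange(len(oil_rate)) * dt_years
    cash = oil_rate * oil_price - opex
    return np.sum(cash / (1.0 + discount) ** t) - capex

rng = np.random.default_rng(1)
ensemble = rng.uniform(0.8, 1.2, (100, 20))        # 100 matched models,
ensemble *= 1e6 * np.exp(-0.15 * np.arange(20))    # 20-year rate profiles
prices = rng.normal(70.0, 15.0, 50)                # 50 oil-price scenarios

# Cross every realisation with every price scenario, then summarize.
npvs = np.array([npv(q, p, opex=5e6, capex=2e8)
                 for q in ensemble for p in prices])
p10, p50, p90 = np.percentile(npvs, [10, 50, 90])  # probabilistic indicators
```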


SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3349-3365 ◽  
Author(s):  
Azadeh Mamghaderi ◽  
Babak Aminshahidy ◽  
Hamid Bazargan

Summary Using fast and reliable proxies instead of sophisticated and time-consuming reservoir simulators is of great importance in reservoir management. The capacitance-resistance model (CRM) is a fast proxy that has been widely used in this area; however, the inadequacy of such a proxy, which simplifies a complex reservoir to a limited number of parameters, has not been addressed appropriately in the literature. In this study, potential uncertainties in modeling the waterflooding process with the producer-based version of CRM (CRMP) are formulated, leading to a new error-related term embedded into the original formulation of the proxy. Considering a general form of the model error that represents both white and colored noise, a system of CRMP-error equations is introduced analytically to deal with any type of intrinsic model imperfection. Two approaches are developed for the solution: tuning the additional error-related parameters as a complementary stage of a classical history-matching procedure, and updating these parameters simultaneously with the original model parameters in a data-assimilation approach over the model training time. To validate the model and show the effectiveness of both solution schemes, injection and production data from a water-injection process in a three-layered reservoir model are used. Results show that the error-related parameters can be matched successfully along with the original model variables, either in a routine model-calibration procedure or in a data-assimilation approach using the ensemble Kalman filter (EnKF). Comparing the average of the obtained liquid-rate range with the true data demonstrates the effectiveness of accounting for model error, which substantially improves the results relative to applying the original model without the error term.
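
A minimal numpy sketch of a CRMP forecast with an additive error term (constant bottomhole pressure is assumed, so the pressure term of CRMP is dropped; the AR(1) error shown is one simple instance of the white/colored-noise family considered in the paper):

```python
import numpy as np

def crmp_rate(inj, f, tau, q0, dt=1.0, rho=0.0, sigma_e=0.0, rng=None):
    """CRMP liquid-rate forecast for one producer.
    inj: (n_steps, n_inj) injection rates; f: (n_inj,) connectivities;
    tau: producer time constant; q0: initial rate;
    rho, sigma_e: AR(1) error parameters (rho=0 gives white noise)."""
    rng = rng or np.random.default_rng()
    decay = np.exp(-dt / tau)
    q = np.empty(len(inj))
    q_prev, err = q0, 0.0
    for k, I_k in enumerate(inj):
        # Material-balance core of CRMP: exponential decay of the previous
        # rate plus the connectivity-weighted injection support.
        q_model = q_prev * decay + (1.0 - decay) * (f @ I_k)
        # Additive model-error term, matched alongside f and tau.
        err = rho * err + sigma_e * rng.standard_normal()
        q[k] = q_model + err
        q_prev = q_model
    return q
```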


SPE Journal ◽  
2016 ◽  
Vol 21 (06) ◽  
pp. 2195-2207 ◽  
Author(s):  
Duc H. Le ◽  
Alexandre A. Emerick ◽  
Albert C. Reynolds

Summary Recently, Emerick and Reynolds (2012) introduced the ensemble smoother with multiple data assimilations (ES-MDA) for assisted history matching. With computational examples, they demonstrated that ES-MDA provides both a better data match and a better quantification of uncertainty than is obtained with the ensemble Kalman filter (EnKF). However, similar to EnKF, ES-MDA can experience near-collapse of the ensemble and produce too many extreme values of rock-property fields for complex problems. These negative effects can be avoided by a judicious choice of the ES-MDA inflation factors but, before this work, the optimal inflation factors could be determined only by trial and error. Here, we provide two automatic procedures for adaptively choosing the inflation factor for the next data-assimilation step as the history match proceeds. Both methods are motivated by regularization procedures: the first is intuitive and heuristic; the second is motivated by existing theory on the regularization of least-squares inverse problems. We illustrate that the adaptive ES-MDA algorithms are superior to the original ES-MDA algorithm by history matching three-phase-flow production data for a complicated synthetic problem in which the reservoir-model parameters include the porosity, horizontal and vertical permeability fields, depths of the initial fluid contacts, and the parameters of power-law permeability curves.
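
A minimal sketch of the ES-MDA update with inflation factors (the decreasing schedule in the docstring follows Emerick and Reynolds' published example; the paper's adaptive selection of each factor is not reproduced here). The factors must satisfy sum(1/alpha) = 1 so that the multiple inflated updates are consistent with a single smoother update:

```python
import numpy as np

def esmda(m, forward, d_obs, C_d, alphas, rng):
    """m: (n_param, n_ens) parameter ensemble; forward maps the ensemble to
    an (n_obs, n_ens) predicted-data matrix; alphas: inflation factors with
    sum(1/alpha) == 1, e.g., the decreasing schedule [9.333, 7.0, 4.0, 2.0]."""
    assert np.isclose(np.sum(1.0 / np.asarray(alphas)), 1.0)
    n_obs, n_ens = len(d_obs), m.shape[1]
    for a in alphas:
        D = forward(m)                                  # predicted data
        Ma = m - m.mean(1, keepdims=True)
        Da = D - D.mean(1, keepdims=True)
        C_md = Ma @ Da.T / (n_ens - 1)
        C_dd = Da @ Da.T / (n_ens - 1)
        # Inflated observation noise: a larger alpha gives a gentler
        # update, which protects against ensemble collapse.
        noise = rng.multivariate_normal(np.zeros(n_obs), a * C_d, n_ens).T
        m = m + C_md @ np.linalg.solve(C_dd + a * C_d,
                                       (d_obs[:, None] + noise) - D)
    return m
```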


SPE Journal ◽  
2021 ◽  
pp. 1-20 ◽  
Author(s):  
Guohua Gao ◽  
Jeroen Vink ◽  
Fredrik Saaf ◽  
Terence Wells

Summary When formulating history matching within the Bayesian framework, we may quantify the uncertainty of model parameters and production forecasts using conditional realizations sampled from the posterior probability density function (PDF). It is quite challenging to sample such a posterior PDF. Some methods [e.g., Markov chain Monte Carlo (MCMC)] are very expensive, whereas other methods are cheaper but may generate biased samples. In this paper, we propose an unconstrained Gaussian mixture model (GMM) fitting method to approximate the posterior PDF and investigate new strategies to further enhance its performance. To reduce the central processing unit (CPU) time of handling bound constraints, we reformulate the GMM fitting formulation such that an unconstrained optimization algorithm can be applied to find the optimal solution of unknown GMM parameters. To obtain a sufficiently accurate GMM approximation with the lowest number of Gaussian components, we generate random initial guesses, remove components with very small or very large mixture weights after each GMM fitting iteration, and prevent their reappearance using a dedicated filter. To prevent overfitting, we add a new Gaussian component only if the quality of the GMM approximation on a (large) set of blind-test data sufficiently improves. The unconstrained GMM fitting method with the new strategies proposed in this paper is validated using nonlinear toy problems and then applied to a synthetic history-matching example. It can construct a GMM approximation of the posterior PDF that is comparable to the MCMC method, and it is significantly more efficient than the constrained GMM fitting formulation (e.g., reducing the CPU time by a factor of 800 to 7,300 for problems we tested), which makes it quite attractive for large-scale history-matching problems. NOTE: This paper is published as part of the 2021 SPE Reservoir Simulation Special Issue.
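
A minimal sketch of the reparameterization device that removes the constraints (softmax weights and Cholesky factors with a log-parameterized diagonal are standard choices; the paper's exact formulation, component-pruning filter, and blind-test criterion are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def unpack(theta, n_comp, dim):
    """Map an unconstrained vector theta to valid GMM parameters."""
    k = 0
    w = np.exp(theta[k:k + n_comp]); w /= w.sum(); k += n_comp  # softmax weights
    mu = theta[k:k + n_comp * dim].reshape(n_comp, dim); k += n_comp * dim
    covs = []
    for _ in range(n_comp):
        L = np.zeros((dim, dim))
        L[np.tril_indices(dim)] = theta[k:k + dim * (dim + 1) // 2]
        k += dim * (dim + 1) // 2
        np.fill_diagonal(L, np.exp(np.diag(L)))  # positive diagonal
        covs.append(L @ L.T)                     # valid covariance matrix
    return w, mu, covs

def neg_log_lik(theta, X, n_comp):
    """Negative log-likelihood of data X under the GMM encoded by theta."""
    w, mu, covs = unpack(theta, n_comp, X.shape[1])
    dens = sum(wi * multivariate_normal.pdf(X, mi, ci)
               for wi, mi, ci in zip(w, mu, covs))
    return -np.log(dens + 1e-300).sum()

# Any unconstrained optimizer now applies; no bound handling is needed:
# res = minimize(neg_log_lik, theta0, args=(X, n_comp), method="BFGS")
```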


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history-matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data-assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep-representation-learning method, namely the deep convolutional autoencoder, which reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic dataset, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and on the low-dimensional hidden features extracted from the seismic measurements. We test the methodology on two synthetic datasets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
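
A minimal PyTorch sketch of a deep convolutional autoencoder for this purpose (layer sizes and the 64x64 input are illustrative, not the authors' architecture):

```python
import torch
import torch.nn as nn

class SeismicAutoencoder(nn.Module):
    """Encode a 64x64 seismic attribute map into n_latent hidden features."""
    def __init__(self, n_latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, n_latent),   # low-dimensional features
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),       # 16 -> 32
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),                  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)       # hidden features assimilated by ES-MDA
        return self.decoder(z)    # reconstruction used for training

# Train with a reconstruction loss (e.g., MSE), then assimilate
# encoder(observed_seismic) in place of the full seismic data volume.
```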


Geofluids ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Sungil Kim ◽  
Baehyun Min ◽  
Seoyoon Kwon ◽  
Min-gon Chu

For ensemble-based history matching of a channelized reservoir, loss of geological plausibility is a challenge because of the pixel-based manipulation of channel shape and connectivity, despite sufficient conditioning to dynamic observations. Regarding this loss as artificial noise, this study designs a serial denoising autoencoder (SDAE) composed of two neural-network filters, uses this machine-learning algorithm to relieve noise effects in the ensemble smoother with multiple data assimilation (ES-MDA), and improves the overall history-matching performance. As the training dataset of the SDAE, static reservoir models are realized with multipoint geostatistics and contaminated with two types of noise: salt-and-pepper noise and Gaussian noise. The SDAE learns how to eliminate the noise and restore the clean reservoir models through encoding and decoding, using the noisy realizations as inputs and the original realizations as outputs. The trained SDAE is embedded in the ES-MDA: the posterior reservoir models updated with the Kalman gain are imported to the SDAE, which then exports the purified prior models for the next assimilation. In this manner, a clear contrast among rock-facies parameters is maintained during the multiple data assimilations. A case study on a gas reservoir indicates that ES-MDA coupled with the noise remover outperforms conventional ES-MDA. Improvement in history-matching performance from denoising is also observed for ES-MDA algorithms combined with dimension-reduction approaches such as the discrete cosine transform, K-singular value decomposition, and a stacked autoencoder. The results imply that a well-trained SDAE can be a reliable auxiliary method for enhancing the performance of data-assimilation algorithms if the computational cost of machine learning is affordable.
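
A minimal sketch of where the trained SDAE sits in the assimilation loop (esmda_update, train_sdae, and add_noise are hypothetical helpers; the serial two-filter structure of the SDAE is collapsed into a single callable here):

```python
import numpy as np

# Training stage (schematic): realizations from multipoint geostatistics
# are corrupted with salt-and-pepper and Gaussian noise; the SDAE learns
# the mapping noisy -> clean.
# sdae = train_sdae(inputs=add_noise(prior_models), targets=prior_models)

def esmda_with_denoiser(models, sdae, observations, alphas, esmda_update):
    """models: (n_ens, n_cells) facies ensemble; sdae(x) returns the
    denoised ensemble; esmda_update performs one Kalman-gain update with
    inflation factor alpha."""
    for a in alphas:
        posterior = esmda_update(models, observations, alpha=a)
        # Treat the pixel-level damage done by the Kalman-gain update as
        # noise: the SDAE restores crisp facies contrasts before the
        # ensemble is used as the prior of the next assimilation.
        models = sdae(posterior)
    return models
```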

