Uncertainty Quantification Using Streamline-Based Inversion and Distance-Based Clustering

2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Jihoon Park ◽  
Jeongwoo Jin ◽  
Jonggeun Choe

For decision making, it is crucial to have proper reservoir characterization and a reliable uncertainty assessment of reservoir performance. Since initial models constructed with limited data carry high uncertainty, it is essential to integrate both static and dynamic data for reliable future predictions. Uncertainty quantification is computationally demanding because a single history match requires many iterative forward simulations and optimizations, and multiple realizations of the reservoir model must be computed. In this paper, a methodology is proposed to rapidly quantify uncertainty by combining streamline-based inversion and distance-based clustering. The distance between two reservoir models is defined as the norm of the difference of their generalized travel time (GTT) vectors. Reservoir models are then grouped according to these distances, and a representative model is selected from each group. Inversions are performed on the representative models instead of on all models. We use generalized travel time inversion (GTTI) to integrate the dynamic data because it overcomes high nonlinearity and is computationally efficient. It is verified that the proposed method gathers models with similar dynamic responses and similar permeability distributions. It also assesses the uncertainty of reservoir performance reliably, while significantly reducing the amount of computation by using only the representative models.
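To make the clustering step concrete, here is a minimal sketch, assuming each model is summarized by a GTT vector with one entry per well, as in the abstract. The paper defines the distance as the norm of the difference of GTT vectors; the k-medoids routine below is a generic stand-in for the distance-based clustering, and all names and sizes are illustrative.

```python
import numpy as np

def gtt_distance_matrix(gtt):
    """Pairwise distances between models, where gtt[i] is the
    generalized-travel-time (GTT) vector of model i (one entry per well).
    The distance is the norm of the difference of the GTT vectors."""
    n = len(gtt)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.linalg.norm(gtt[i] - gtt[j])
    return d

def k_medoids(d, k, n_iter=100, seed=0):
    """Plain k-medoids on a precomputed distance matrix; returns the
    indices of the representative (medoid) models and cluster labels."""
    rng = np.random.default_rng(seed)
    n = d.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # medoid = member minimizing total distance to its cluster
                new[c] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# toy usage: 200 models, GTT vectors over 8 wells (illustrative numbers)
gtt = np.random.default_rng(1).normal(size=(200, 8))
d = gtt_distance_matrix(gtt)
reps, labels = k_medoids(d, k=10)   # invert only the 10 medoid models
```

Only the medoid of each group is then passed to the GTTI step, which is where the computational savings come from.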

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. The method is applied to a real channel reservoir off the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs with this method. A geological description of the reservoir case study is then provided, along with the procedure for building 3D reservoir models conditioned only to the static data. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed so that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. To check predictive power, the matched models are then run for the last 1½ years, and the results compare favorably with the field data.

Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic, and dynamic (production-rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built from geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
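The core PPM idea, perturbing probabilities rather than properties, can be sketched in a few lines. The one-parameter blend below follows Caers' published formulation of the method; the facies proportion, grid size, and the omitted one-dimensional search over the perturbation parameter r are illustrative assumptions.

```python
import numpy as np

def perturb_probability(i0, p_global, r):
    """Probability-perturbation step (after Caers): blend the indicator
    field of the current realization i0 (0/1 facies, e.g. channel vs.
    background) with the global facies proportion p_global.
    r = 0 keeps the current model; r = 1 draws a fresh model from the prior.
    The returned field is used as a soft probability when resimulating
    the facies with the multiple-point geostatistical algorithm."""
    return (1.0 - r) * i0 + r * p_global

# toy usage: 100x100 binary channel indicator, 30% channel proportion
rng = np.random.default_rng(0)
i0 = (rng.random((100, 100)) < 0.3).astype(float)
p_soft = perturb_probability(i0, p_global=0.3, r=0.2)
```

In the full method, an outer one-dimensional optimization over r is wrapped around this step so that each resimulation moves the model toward a better production-data match while staying within the prior geologic concept.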


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽ 
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of this study was to create an ensemble of equiprobable models that could be used to improve the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze, and history match an ensemble of reservoir simulation models against production and 4D seismic data. The goal of developing the workflows is to increase the utilization of 4D seismic data for reservoir characterization. The qualitative and quantitative workflows are presented, along with their benefits and challenges. The data conditioning produced a set of history-matched reservoir models that could be used in the field-development decision-making process. The proposed workflows allowed outlying prior and posterior models to be identified based on key features where the observed data were not covered by the synthetic 4D seismic realizations. As a result, suggestions were made for a more robust parameterization of the ensemble to improve data coverage. The existing history-matching workflow integrated efficiently with the quantitative 4D seismic history-matching workflow, allowing the reservoir models to be conditioned to production and 4D data and thereby improving their predictability. This paper proposes a systematic and efficient workflow that uses ensemble-based methods to simultaneously screen, analyze, and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization and, in turn, for reservoir management and decision making.
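As a rough illustration of the screening idea (flagging models where observed data are not covered by the synthetic realizations), the sketch below checks an observation vector against an ensemble quantile envelope; the quantile criterion is an assumption, not the authors' exact test.

```python
import numpy as np

def coverage_screen(ensemble, observed, q=(0.05, 0.95)):
    """Flag data points where the observation falls outside the
    ensemble's q-quantile envelope, a simple stand-in for the
    qualitative screening described above.
    ensemble: (n_realizations, n_data); observed: (n_data,)."""
    lo, hi = np.quantile(ensemble, q, axis=0)
    outside = (observed < lo) | (observed > hi)
    return outside, outside.mean()   # per-point mask, fraction uncovered

# toy usage: 100 prior realizations of a 50-point 4D seismic attribute
rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 50))
obs = rng.normal(loc=0.5, size=50)   # biased "observation"
mask, frac = coverage_screen(ens, obs)
```

A high uncovered fraction would, as in the paper, suggest revisiting the prior parameterization rather than forcing a match.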


Author(s):  
Jiyoon Lee ◽  
Jonggeun Choe

A distance is defined as a measure of dissimilarity between two reservoir models. Many distances have been proposed for fast modeling; however, some of them distort or lose the original permeability distribution of the models. To avoid such problems, this study proposes a pattern-recognition-based distance, defined by the difference of correlation coefficients between ensemble models. Using multidimensional scaling, the initial 400 ensemble models are projected onto a 2D plane according to this distance, and 10 groups are then formed by K-medoids clustering. After comparing the oil production of each centroid with that of the reference field, 100 models are selected around the best centroid. We validate the clustering by comparing the uncertainty ranges of 100, 50, and 20 ensemble members sampled from the initial 400 models using box plots and cumulative distribution functions. For history matching and reservoir characterization, an ensemble smoother is applied to the 100 selected models. The proposed method requires only 25% of the simulation time of the initial 400 models while producing comparably reliable results.
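A minimal sketch of the distance and projection steps, assuming flattened permeability fields: the paper defines the distance via differences of correlation coefficients between models; the `1 - corrcoef` form below is one common variant and is an assumption, as are the model count and grid size.

```python
import numpy as np
from sklearn.manifold import MDS

def correlation_distance_matrix(perm):
    """perm: (n_models, n_cells) flattened permeability fields.
    Distance between models i and j is taken here as 1 minus the
    Pearson correlation coefficient of their fields."""
    c = np.corrcoef(perm)          # (n_models, n_models) correlations
    d = 1.0 - c
    np.fill_diagonal(d, 0.0)       # guard against float noise
    return d

# toy usage: 400 models on a 50x50 grid, projected to 2D for clustering
perm = np.random.default_rng(2).normal(size=(400, 2500))
d = correlation_distance_matrix(perm)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)   # points on the 2D plane
```

K-medoids clustering (as sketched for the first paper above) can then be run on `d` or on `coords` to form the 10 groups.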


SPE Journal ◽  
2008 ◽  
Vol 13 (01) ◽  
pp. 99-111 ◽  
Author(s):  
Vegard R. Stenerud ◽  
Vegard Kippe ◽  
Knut-Andreas Lie ◽  
Akhil Datta-Gupta

Summary A particularly efficient reservoir simulator can be obtained by combining a recent multiscale mixed finite-element flow solver with a streamline method for computing fluid transport. This multiscale-streamline method has been shown to be a promising approach for fast flow simulations on high-resolution geologic models with multimillion-cell grids. The multiscale method solves the pressure equation on a coarse grid while preserving important fine-scale details in the velocity field. Fine-scale heterogeneity is accounted for through a set of generalized, heterogeneous basis functions that are computed numerically by solving local flow problems. When included in the coarse-grid equations, the basis functions ensure that the global equations are consistent with the local properties of the underlying differential operators. The multiscale method offers a substantial gain in computation speed, without significant loss of accuracy, when the basis functions are updated infrequently throughout a dynamic simulation. In this paper, we propose to combine the multiscale-streamline method with a recent "generalized travel-time inversion" method to derive a fast and robust method for history matching high-resolution geocellular models. A key point in the new method is the use of sensitivities that are calculated analytically along streamlines with little computational overhead. The sensitivities are used in the travel-time inversion formulation to give a robust quasilinear method that typically converges in a few iterations and largely avoids the time-consuming trial and error of manual history matching. Moreover, the sensitivities are used to ensure that basis functions are adaptively updated only in areas with relatively large sensitivity to the production response. The sensitivity-based adaptive approach allows us to selectively update only a fraction of the total number of basis functions, which gives substantial savings in computation time for the forward flow simulations. We demonstrate the power and utility of our approach using a simple 2D model and a highly detailed 3D geomodel. The 3D simulation model consists of more than 1,000,000 cells with 69 producing wells. Using our proposed approach, history matching over a period of 7 years is accomplished in less than 20 minutes on an ordinary workstation PC.

Introduction It is well known that geomodels derived from static data only (such as geological, seismic, well-log, and core data) often fail to reproduce the production history. Reconciling geomodels with the dynamic response of the reservoir is critical for building reliable reservoir models. In the past few years, there have been significant developments in the area of dynamic data integration through the use of inverse modeling. Streamline methods have shown great promise in this regard (Vasco et al. 1999; Wang and Kovscek 2000; Milliken et al. 2001; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Streamline-based methods have the advantage of being highly efficient "forward" simulators and allow production-response sensitivities to be computed analytically from a single flow simulation (Vasco et al. 1999; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Sensitivities describe the change in production responses caused by small perturbations in reservoir properties such as porosity and permeability and are a vital part of many methods for integrating dynamic data.
Even though streamline simulators provide fast forward simulation compared with a full finite-difference simulation in 3D, the forward simulation is still the most time-consuming part of the history-matching process. A streamline simulation consists of two steps that are repeated: (1) solution of a 3D pressure equation to compute flow velocities; and (2) solution of 1D transport equations for evolving fluid compositions along representative sets of streamlines, followed by a mapping back to the underlying pressure grid. The first step is referred to as the "pressure step" and is often the most time-consuming. Consequently, history matching and flow simulation are usually performed on upscaled simulation models, which imposes the need for subsequent downscaling if the dynamic data are to be integrated into the geomodel. Upscaling and downscaling may result in the loss of important fine-scale information.
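For reference, the generalized travel time at a single well can be viewed as the one time shift that best aligns the simulated response with the observed one. The brute-force scan below is a simplified sketch: the GTTI literature typically maximizes a correlation coefficient, whereas a plain squared misfit is used here for brevity, and the toy water-cut curves are illustrative.

```python
import numpy as np

def generalized_travel_time(t, d_obs, d_sim, shifts):
    """Generalized travel time at one well: the time shift dt that best
    aligns the simulated response d_sim with the observed data d_obs,
    found by a brute-force scan over candidate shifts."""
    best, best_misfit = 0.0, np.inf
    for dt in shifts:
        shifted = np.interp(t, t + dt, d_sim)   # d_sim delayed by dt
        misfit = np.sum((d_obs - shifted) ** 2)
        if misfit < best_misfit:
            best, best_misfit = dt, misfit
    return best

# toy usage: "observed" water cut lags the simulated one by ~70 days
t = np.linspace(0.0, 1000.0, 200)
d_sim = 1.0 / (1.0 + np.exp(-(t - 400.0) / 50.0))
d_obs = 1.0 / (1.0 + np.exp(-(t - 470.0) / 50.0))
dt = generalized_travel_time(t, d_obs, d_sim, np.linspace(-150, 150, 301))
```

In the paper, the corresponding sensitivities of this travel-time shift are obtained analytically along streamlines, which is what keeps each inversion iteration cheap.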


SPE Journal ◽  
2009 ◽  
Vol 15 (01) ◽  
pp. 31-38 ◽  
Author(s):  
Linah Mohamed ◽  
Mike Christie ◽  
Vasily Demyanov

Summary History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir (e.g., from analog outcrop data) and update our reservoir models with observations (e.g., from production data or time-lapse seismic). The goal of this activity is often to generate multiple models that match the history and to use these models to quantify uncertainties in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate them. Algorithms that have been studied include gradient methods, genetic algorithms, and the ensemble Kalman filter (EnKF). This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm intelligence algorithm that uses dynamics similar to HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possibly multiple minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to obtain multiple history-matched models. The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (p10-p50-p90) produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example, but some algorithms find good-fitting models quickly, whereas others find a more diverse set of models in parameter space. The effects of the different samplings of model parameter space are compared in terms of the p10-p50-p90 uncertainty envelopes in forecast oil rate. These results show that algorithms based on Hamiltonian dynamics and swarm intelligence concepts have the potential to be effective tools for uncertainty quantification in the oil industry.
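Of the three samplers, PSO is the simplest to sketch. The loop below is a generic particle-swarm minimizer of a history-match misfit, with `w`, `c1`, and `c2` playing the role of the damping and acceleration parameters mentioned above; it is a textbook variant, not the authors' tuned implementation.

```python
import numpy as np

def pso_minimize(misfit, bounds, n_particles=30, n_iter=100,
                 w=0.7, c1=1.4, c2=1.4, seed=0):
    """Minimal particle-swarm optimizer for a misfit function.
    w damps the velocity; c1/c2 accelerate particles toward their
    personal best and the global best, respectively."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([misfit(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# toy usage: two "reservoir parameters" with a synthetic quadratic misfit
g, fmin = pso_minimize(lambda m: np.sum((m - 0.3) ** 2),
                       bounds=(np.zeros(2), np.ones(2)))
```

For uncertainty quantification, the accepted models found this way (rather than just the single best point) are what feed the p10-p50-p90 forecast envelopes.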


2013 ◽  
Vol 748 ◽  
pp. 614-618
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from limited available static and dynamic data, and history matching is the process of changing model parameters to find a set of values that yields a reservoir simulation prediction matching the observed historical production data. Minimizing the objective function involved in the history-matching procedure requires optimization algorithms. This paper focuses on the optimization algorithms used in automatic history matching and compares several of them.
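A typical objective function in automatic history matching is a weighted least-squares misfit between observed and simulated data. The sketch below wires such a misfit into a gradient-based optimizer; the linear toy "simulator" stands in for a real reservoir simulator, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def objective(m, d_obs, simulate, sigma):
    """Weighted least-squares history-matching objective:
    J(m) = sum ((d_obs - d_sim(m)) / sigma)^2, the quantity the
    compared optimization algorithms seek to minimize."""
    return np.sum(((d_obs - simulate(m)) / sigma) ** 2)

# toy stand-in for a reservoir simulator: linear response in m
A = np.array([[1.0, 0.5], [0.2, 2.0], [1.5, 0.1]])
simulate = lambda m: A @ m
d_obs = simulate(np.array([0.8, 0.3]))        # synthetic "history"
res = minimize(objective, x0=np.zeros(2),
               args=(d_obs, simulate, 0.1), method="L-BFGS-B")
print(res.x)   # recovers the true parameters [0.8, 0.3]
```

The algorithms compared in such studies differ mainly in how they navigate this misfit surface (gradient-based, derivative-free, or population-based), not in the form of J(m) itself.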


SPE Journal ◽  
2007 ◽  
Vol 12 (03) ◽  
pp. 382-391 ◽  
Author(s):  
Mohammad Zafari ◽  
Albert Coburn Reynolds

Summary Recently, the ensemble Kalman filter (EnKF) has gained popularity in atmospheric science for the assimilation of data and the assessment of uncertainty in forecasts for complex, large-scale problems. A handful of papers have discussed reservoir characterization applications of the EnKF, which can easily and quickly be coupled with any reservoir simulator. Neither adjoint code nor specific knowledge of simulator numerics is required to implement the EnKF. Moreover, data are assimilated (matched) as they become available; a suite of plausible reservoir models (the ensemble, i.e., the set of ensemble members or realizations) is continuously updated to honor new data without rematching data assimilated previously. Because of these features, the method is far more efficient for history matching dynamic data than automatic history matching based on optimization algorithms. Moreover, the set of realizations provides a way to evaluate the uncertainty in the reservoir description and in performance predictions. Here we establish a firm theoretical relation between randomized maximum likelihood and the ensemble Kalman filter. Although we have previously presented reservoir characterization examples where the method worked well, here we also provide examples where the EnKF does not provide a reliable characterization of uncertainty.

Introduction Our main interest is in characterizing the uncertainty in reservoir description and reservoir performance predictions in order to optimize reservoir management. To do so, we wish to generate a suite of plausible reservoir models (realizations) that are consistent with all information and data. If the set of models is obtained by correctly sampling the probability density function (pdf), then it characterizes the uncertainty in the reservoir model. Thus, by predicting future reservoir performance with each realization and calculating statistics on the set of outcomes, one can evaluate the uncertainty in reservoir performance predictions.
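For context, one EnKF analysis step in its standard form is a few lines of linear algebra. This generic sketch (with perturbed observations) is the textbook formulation, not necessarily the exact variant studied by the authors; all shapes and names are illustrative.

```python
import numpy as np

def enkf_update(M, D, d_obs, R, seed=0):
    """One EnKF analysis step.
    M: (n_param, n_ens) ensemble of model states;
    D: (n_data, n_ens) corresponding simulated data;
    d_obs: (n_data,) observed data;
    R: (n_data, n_data) observation-error covariance.
    Each member is updated against a perturbed copy of the observations."""
    n_ens = M.shape[1]
    rng = np.random.default_rng(seed)
    Mm = M - M.mean(axis=1, keepdims=True)     # parameter anomalies
    Dm = D - D.mean(axis=1, keepdims=True)     # data anomalies
    Cmd = Mm @ Dm.T / (n_ens - 1)              # cross-covariance
    Cdd = Dm @ Dm.T / (n_ens - 1)              # predicted-data covariance
    K = Cmd @ np.linalg.inv(Cdd + R)           # Kalman gain
    d_pert = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    return M + K @ (d_pert - D)                # updated ensemble
```

The paper's point is that this update is exact only under Gauss-linear assumptions, which is precisely where the link to randomized maximum likelihood, and the failure cases for uncertainty characterization, come from.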

