Fit-For-Purpose History Matching Approach for A Low-Quality Reservoir Under Waterflood: Integration of Uncertain Production Allocation

Author(s):  
M. Syafwan

This paper presents a fit-for-purpose approach to mitigate zonal production data allocation uncertainty during history matching of a reservoir simulation model due to limited production logging data. To avoid propagating perforation/production zone allocation uncertainty at commingled wells into the history matched reservoir model, only well-level production data from historical periods when production was from a single zone were used to calibrate the reservoir properties that determine initial volumetrics. Then, during periods of the history with commingled production, average reservoir pressure measurements were integrated into the model to allocate fluid production to the target reservoir. Finally, the periods constrained by dedicated well-level fluid production and average reservoir pressure were merged over the forty-eight-year history to construct a single history matched reservoir model in preparation for waterflood performance forecasting. This innovative history matching approach, which mitigates the impacts of production allocation uncertainty by using different intervals of the historical data to calibrate model saturations and model pressures, has provided a new interpretation of OOIP and current recovery factor, as well as drive mechanisms including aquifer strength and capillary pressure. Fluid allocation from the target reservoir in the history matched model is 85% lower than previously estimated. The history matched model was used as a quantitative forecasting and optimization tool to expand the recent waterflood with improved production forecast reliability. The remaining mobile oil saturation map and streamline-based waterflood diagnostics have improved understanding of injector-producer connectivity and swept pore volumes, e.g., current swept volumes are minor and well-centric with limited indication of breakthrough at adjacent producers, resulting in high remaining mobile oil saturation. Accordingly, the history matched model provides a foundation to select new injection points, determine dedicated producer locations and support optimized injection strategies to improve recovery.
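
As a rough illustration of the data split this approach implies, the sketch below partitions a well history into single-zone periods (used to calibrate volumetrics against unambiguous well rates) and commingled periods (constrained only by average reservoir pressure). All column names and values are hypothetical, not taken from the paper.

```python
import pandas as pd

# Hypothetical zonal-completion history; in practice this would come
# from the production database (columns are illustrative only).
hist = pd.DataFrame({
    "well": ["W1", "W1", "W1", "W2"],
    "date": pd.to_datetime(["1975-01-01", "1990-01-01", "2005-01-01", "2005-01-01"]),
    "oil_rate": [1200.0, 800.0, 450.0, 600.0],
    "n_open_zones": [1, 2, 2, 1],
    "avg_res_pressure": [3400.0, 2900.0, 2600.0, 2650.0],
})

# Single-zone periods: well-level rates are unambiguous, so they are
# used to calibrate in-place volumes (the saturation match).
single_zone = hist[hist["n_open_zones"] == 1]

# Commingled periods: zonal rate allocation is uncertain, so only the
# average reservoir pressure is honored as a matching constraint.
commingled = hist.loc[hist["n_open_zones"] > 1, ["well", "date", "avg_res_pressure"]]
```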

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models built by methods in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part of this paper describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are only conditioned to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
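
A minimal sketch of the probability-perturbation idea follows. It assumes a generic production-mismatch function `mismatch` and a geostatistical sampler `draw_realization`, both hypothetical stand-ins for the multiple-point simulator and flow simulation used in the paper.

```python
import numpy as np

def perturbed_prob(prior_prob, current_model, r):
    """P(A|D) = (1 - r) * i_A + r * P(A): blend the indicator of the
    current realization with the prior facies probability. r = 0 keeps
    the current model; r = 1 redraws freshly from the prior, so the
    conceptual geology is preserved for any r in [0, 1]."""
    return (1.0 - r) * current_model.astype(float) + r * prior_prob

def ppm_step(prior_prob, current_model, draw_realization, mismatch):
    """One outer PPM iteration: a coarse line search over r keeps the
    candidate realization with the lowest production mismatch."""
    best_err, best_model = np.inf, current_model
    for r in np.linspace(0.1, 1.0, 10):
        candidate = draw_realization(perturbed_prob(prior_prob, current_model, r))
        err = mismatch(candidate)
        if err < best_err:
            best_err, best_model = err, candidate
    return best_err, best_model
```

Because each candidate is drawn from a perturbed probability cube rather than edited cell by cell, every intermediate model remains a legitimate sample of the geologic concept, which is the property the abstract emphasizes.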


2006 ◽  
Vol 9 (05) ◽  
pp. 502-512 ◽  
Author(s):  
Arne Skorstad ◽  
Odd Kolbjornsen ◽  
Asmund Drottning ◽  
Havar Gjoystdal ◽  
Olaf K. Huseby

Summary Elastic seismic inversion is a tool frequently used in analysis of seismic data. Elastic inversion relies on a simplified seismic model and generally produces 3D cubes for compressional-wave velocity, shear-wave velocity, and density. By applying rock-physics theory, such volumes may be interpreted in terms of lithology and fluid properties. Understanding the robustness of forward and inverse techniques is important when deciding the amount of information carried by seismic data. This paper suggests a simple method to update a reservoir characterization by comparing 4D-seismic data with flow simulations on an existing characterization conditioned on the base-survey data. The ability to use results from a 4D-seismic survey in reservoir characterization depends on several aspects. To investigate this, a loop that performs independent forward seismic modeling and elastic inversion at two time stages has been established. In the workflow, a synthetic reservoir is generated from which data are extracted. The task is to reconstruct the reservoir on the basis of these data. By working on a realistic synthetic reservoir, full knowledge of the reservoir characteristics is achieved. This strengthens the evaluation of questions regarding the fundamental dependency between the seismic and petrophysical domains. The synthetic reservoir is an ideal case, where properties are known to an accuracy never achieved in an applied situation. It can therefore be used to investigate the theoretical limitations of the information content in the seismic data. The deviations in water and oil production between the reference and predicted reservoir were significantly decreased by use of 4D-seismic data in addition to the 3D inverted elastic parameters. Introduction It is well known that the information in seismic data is limited by the bandwidth of the seismic signal. 4D seismic data give information on the changes between base and monitor surveys and are consequently an important source of information regarding the principal flow in a reservoir. Because of its limited resolution, the presence of a thin thief zone can be observed only as a consequence of flow, and the exact location will not be found directly. This paper addresses the question of how much information there is in the seismic data, and how this information can be used to update the model for petrophysical reservoir parameters. Several methods for incorporating 4D-seismic data in the reservoir-characterization workflow for improving history matching have been proposed earlier. The 4D-seismic data and the corresponding production data are not on the same scale, but they need to be combined. Huang et al. (1997) proposed a simulated annealing method for conditioning these data, while Lumley and Behrens (1997) describe a workflow loop in which the 4D-seismic data are compared with those computed from the reservoir model. Gosselin et al. (2003) give a short overview of the use of 4D-seismic data in reservoir characterization and propose using gradient-based methods for history matching the reservoir model on seismic and production data. Vasco et al. (2004) show that 4D data contain information on large-scale reservoir-permeability variations, and they illustrate this in a Gulf of Mexico example.
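
The forward-modeling and comparison loop can be illustrated roughly as below. The linearized rock-physics relation and its coefficients are assumptions made here for illustration, not the paper's model; the inputs are assumed numpy arrays of porosity, saturation, and inverted impedance at base and monitor times.

```python
import numpy as np

def synthetic_impedance(porosity, water_sat, a=6000.0, b=-4000.0, c=500.0):
    # Hypothetical linearized rock-physics relation: impedance decreases
    # with porosity and increases mildly with water saturation.
    return a + b * porosity + c * water_sat

def time_lapse_misfit(phi, sw_base, sw_monitor, ip_base_obs, ip_monitor_obs):
    """RMS difference between simulated and inverted impedance *changes*;
    differencing the two surveys cancels static errors common to both,
    which is why 4D data chiefly constrain flow rather than statics."""
    d_syn = synthetic_impedance(phi, sw_monitor) - synthetic_impedance(phi, sw_base)
    d_obs = ip_monitor_obs - ip_base_obs
    return float(np.sqrt(np.mean((d_syn - d_obs) ** 2)))
```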


2008 ◽  
Vol 2008 ◽  
pp. 1-13 ◽  
Author(s):  
Tina Yu ◽  
Dave Wilkinson ◽  
Alexandre Castellini

Reservoir modeling is a critical step in the planning and development of oil fields. Before a reservoir model can be accepted for forecasting future production, the model has to be updated with historical production data. This process is called history matching. History matching requires computer flow simulation, which is very time-consuming. As a result, only a small number of simulation runs are conducted and the history-matching results are normally unsatisfactory. This is particularly evident when the reservoir has a long production history and the quality of production data is poor. The inadequacy of the history-matching results frequently leads to high uncertainty of production forecasting. To enhance the quality of the history-matching results and improve the confidence of production forecasts, we introduce a methodology using genetic programming (GP) to construct proxies for reservoir simulators. Acting as surrogates for the computer simulators, the “cheap” GP proxies can evaluate a large number (millions) of reservoir models within a very short time frame. With such a large sampling size, the reservoir history-matching results are more informative and the production forecasts are more reliable than those based on a small number of simulation models. We have developed a workflow which incorporates the two GP proxies into the history matching and production forecast process. Additionally, we conducted a case study to demonstrate the effectiveness of this approach. The study has revealed useful reservoir information and delivered more reliable production forecasts. All of these were accomplished without introducing new computer simulation runs.
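
The proxy-screening idea can be sketched as follows, with a plain least-squares quadratic surrogate standing in for the genetic-programming proxy of the paper; the toy mismatch surface, sample counts, and cutoff are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_proxy(X, y):
    """Least-squares surrogate on features [1, x_i, x_i^2]."""
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def proxy_eval(coef, X):
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    return A @ coef

# Stand-in for ~50 expensive simulator runs over 3 uncertain parameters.
X_sim = rng.uniform(0.0, 1.0, size=(50, 3))
y_sim = ((X_sim - 0.3) ** 2).sum(axis=1)          # toy history-match mismatch

coef = fit_proxy(X_sim, y_sim)

# Screen a very large sample cheaply and keep the good-match region,
# something far beyond what direct simulation could afford.
candidates = rng.uniform(0.0, 1.0, size=(1_000_000, 3))
keep = candidates[proxy_eval(coef, candidates) < 0.05]
print(f"{len(keep)} of {len(candidates)} sampled models pass the proxy filter")
```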


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 431-442 ◽  
Author(s):  
Xian-Huan Wen ◽  
Wen H. Chen

Summary The ensemble Kalman filter (EnKF) technique has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. Using EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available. Thus, the estimations of reservoir model parameters, and their associated uncertainty, as well as the forecasts are always up-to-date. In this paper, we apply the EnKF for continuously updating an ensemble of permeability models to match real-time multiphase production data. We improve the previous EnKF by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilating step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, it also creates some inconsistency between the static and dynamic parameters at the previous step where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter shows better performance for the particular example investigated. We also investigate the sensitivity of using a different number of realizations in the EnKF. Our results show that a relatively large number of realizations are needed to obtain stable results, particularly for the reliable assessment of uncertainty. The sensitivity of using different covariance functions is also investigated. The efficiency and robustness of the EnKF are demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production. Introduction The reliability of reservoir models increases as more data are included in their construction. Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-) automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated in the form of a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems, which require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006).
In the past decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications. Under the framework of traditional history matching, the assessment of uncertainty is usually done through a repeated history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, the traditional history-matching methods are not designed in a fashion that allows for continuous model updating. When new production data are available and are required to be incorporated, the history-matching process has to be repeated using all measured data. These limitations reduce the efficiency and applicability of the traditional automatic history-matching techniques.
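
The analysis step that the paper builds on can be sketched as follows; this is a generic textbook EnKF formulation with perturbed observations, not the authors' exact implementation.

```python
import numpy as np

def enkf_update(X, Y, d_obs, obs_std, rng):
    """One EnKF analysis step. X: state vectors, e.g. log-permeability
    fields, shape (n_state, n_ens); Y: corresponding simulated production
    data, shape (n_obs, n_ens); d_obs: measurements, shape (n_obs,)."""
    n_obs, n_ens = Y.shape
    dX = X - X.mean(axis=1, keepdims=True)
    dY = Y - Y.mean(axis=1, keepdims=True)
    C_xy = dX @ dY.T / (n_ens - 1)           # state-data cross-covariance
    C_yy = dY @ dY.T / (n_ens - 1)           # data auto-covariance
    R = (obs_std ** 2) * np.eye(n_obs)       # measurement-error covariance
    K = C_xy @ np.linalg.inv(C_yy + R)       # Kalman gain
    # Perturbed observations keep the analysis ensemble's spread correct.
    D = d_obs[:, None] + obs_std * rng.standard_normal((n_obs, n_ens))
    return X + K @ (D - Y)
```

The confirming option described above would then re-solve the flow equations from the previous assimilation step using the updated permeability fields, rather than carrying the updated dynamic state forward directly, keeping static and dynamic variables consistent at the current step.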


2007 ◽  
Vol 10 (03) ◽  
pp. 233-240 ◽  
Author(s):  
Alberto Cominelli ◽  
Fabrizio Ferdinandi ◽  
Pierre Claude de Montleau ◽  
Roberto Rossi

Summary Reservoir management is based on the prediction of reservoir performance by means of numerical-simulation models. Reliable predictions require that the numerical model mimic the production history. Therefore, the numerical model is modified to match the production data. This process is termed history matching (HM). From a mathematical viewpoint, HM is an optimization problem, where the target is to minimize an objective function quantifying the misfit between observed and simulated production data. One of the main problems in HM is the choice of an effective parameterization—a set of reservoir properties that can be plausibly altered to get a history-matched model. This issue is known as a parameter-identification problem, and its solution usually represents a significant step in HM projects. In this paper, we propose a practical implementation of a multiscale approach aimed at identifying effective parameterizations in real-life HM problems. The approach requires the availability of gradient simulators capable of providing the user with derivatives of the objective function with respect to the parameters at hand. Objective-function derivatives can then be used in a multiscale setting to define a sequence of richer and richer parameterizations. At each step of the sequence, the matching of the production data is improved by means of a gradient-based optimization. The methodology was validated on a synthetic case and was applied to history match the simulation model of a North Sea oil reservoir. The proposed methodology can be considered a practical solution for parameter-identification problems in many real cases until sound methodologies (primarily adaptive multiscale estimation of parameters) become available in commercial software programs. Introduction Predictions of reservoir behavior require the definition of subsurface properties at the scale of the simulation grid cells. At this scale, a reliable description of the porous media requires us to build a reservoir model by integrating all the available sources of data. By their nature, we can categorize the data as prior and production data. Prior data can be seen as "direct" measures or representations of the reservoir properties. Production data include flow measures collected at wells [e.g., water cut, gas/oil ratio (GOR) and shut-in pressure, and time-lapse seismic data]. Prior data are directly incorporated in the setup of the reservoir model, typically in the framework of well-established reservoir-characterization workflows.
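
Posed as an optimization problem, a coarse sketch looks like the following; `simulate` is a hypothetical stand-in for the flow simulator. Here L-BFGS-B falls back to finite-difference gradients, whereas a gradient simulator as used in the paper would supply exact derivatives via `jac=`. In the multiscale setting, `p0` and `bounds` would start with a few coarse region multipliers and be progressively refined.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params, d_obs, sigma, simulate):
    """Weighted least-squares misfit between simulated and observed data."""
    residual = (simulate(params) - d_obs) / sigma
    return 0.5 * float(residual @ residual)

def history_match(p0, d_obs, sigma, simulate, bounds):
    # Bounds keep the altered properties geologically plausible, which
    # is the role of the parameterization choice discussed above.
    return minimize(objective, p0, args=(d_obs, sigma, simulate),
                    method="L-BFGS-B", bounds=bounds)
```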


2021 ◽  
Author(s):  
Usman Aslam ◽  
Luis Hernando Perez Cardenas ◽  
Andrey Klimushin

Abstract The Internet of Things has popularized the notion of a digital twin - a virtual representation of a physical system. There are substantial risks associated with designing a development plan for an oilfield, and the industry has been making use of reservoir models - digital twins - to improve the decision-making process for many years. With an increase in the availability of computational resources, the industry is moving towards ensemble-based workflows to estimate risk in field development plans. In this paper, we demonstrate the use of an integrated ensemble-based approach to assess uncertainties in the reservoir models and quantify their impact on the decision-making process. An important feature of a digital twin is its ability to use sensor data to update the virtual model, more commonly known as history matching or data assimilation. We demonstrate how production data can be used to identify and constrain the uncertainties in the reservoir model. Production data are incorporated using Bayesian statistics and state-of-the-art supervised machine learning techniques to create an ensemble of models that capture the range of uncertainties in the reservoir model. This ensemble of calibrated models with an improved predictive ability provides a realistic assessment of the uncertainty associated with production forecasts. The ensemble-based approach is demonstrated through its application on an offshore oilfield located in the North Sea. The field is highly compartmentalized and has high structural uncertainty following the interpretation and depth conversion. An integrated cross-domain model is set up to incorporate typically ignored structural uncertainty in addition to the uncertainties and their dependencies in the dynamic parameters, including fault transmissibility, pore-volume, fluid contacts, saturation, and relative permeability endpoints, etc. Results from the history matched ensemble of models show a significant reduction in uncertainty in these parameters and the predicted production. An advantage of the proposed technique is that the automated, repeatable, and auditable ensemble-based workflow can assimilate the newly acquired measured data into the reservoir model at any time, keeping the model up-to-date and evergreen.
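
On the forecast side, such an ensemble workflow reduces to running every calibrated member in prediction mode and summarizing the spread, roughly as in this sketch (the member count, horizon, and synthetic rates are illustrative stand-ins, not the field's data).

```python
import numpy as np

# Stand-in for 200 history-matched ensemble members x 120 monthly oil rates.
forecasts = np.random.default_rng(2).normal(100.0, 15.0, size=(200, 120))

# Percentile bands across members quantify forecast uncertainty.
p10, p50, p90 = np.percentile(forecasts, [10, 50, 90], axis=0)
print(f"End-of-horizon rate band: "
      f"P10={p10[-1]:.0f}, P50={p50[-1]:.0f}, P90={p90[-1]:.0f}")
```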


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1052
Author(s):  
Baozhong Wang ◽  
Jyotsna Sharma ◽  
Jianhua Chen ◽  
Patricia Persaud

Estimation of fluid saturation is an important step in dynamic reservoir characterization. Machine learning techniques have been increasingly used in recent years for reservoir saturation prediction workflows. However, most of these studies require input parameters derived from cores, petrophysical logs, or seismic data, which may not always be readily available. Additionally, very few studies incorporate production data, which are an important reflection of the dynamic reservoir properties and also typically the most frequently and reliably measured quantity throughout the life of a field. In this research, a random forest ensemble machine-learning algorithm is implemented that uses field-wide production and injection data (both measured at the surface) as the only input parameters to predict the time-lapse oil saturation profiles at well locations. The algorithm is optimized using feature selection based on feature importance score and Pearson correlation coefficient, in combination with geophysical domain-knowledge. The workflow is demonstrated using actual field data from a structurally complex, heterogeneous, and heavily faulted offshore reservoir. The random forest model captures the trends from three and a half years of historical field production, injection, and simulated saturation data to predict future time-lapse oil saturation profiles at four deviated well locations with over 90% R-square, less than 6% Root Mean Square Error, and less than 7% Mean Absolute Percentage Error in each case.
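
A minimal sketch of this setup using scikit-learn's RandomForestRegressor on synthetic stand-in data follows. The feature-selection rule (importance ranking with a Pearson-correlation cutoff) follows the description above, while the data, feature count, and 0.9 threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(1000, 12))   # stand-in field-wide rate/injection features
y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0.0, 0.02, 1000)  # toy saturation target

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Feature selection as described: rank by importance, then drop any
# feature highly Pearson-correlated with one already kept.
importance = model.feature_importances_
corr = np.corrcoef(X, rowvar=False)
keep = []
for i in np.argsort(importance)[::-1]:      # most important feature first
    if all(abs(corr[i, j]) < 0.9 for j in keep):
        keep.append(int(i))
print("kept feature indices:", keep)
```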


2013 ◽  
Vol 28 (04) ◽  
pp. 369-375 ◽  
Author(s):  
Oscar Vazquez ◽  
Ross A. McCartney ◽  
Eric Mackay

2021 ◽  
Author(s):  
Boxiao Li ◽  
Hemant Phale ◽  
Yanfen Zhang ◽  
Timothy Tokar ◽  
Xian-Huan Wen

Abstract Design of Experiments (DoE) is one of the most commonly employed techniques in the petroleum industry for Assisted History Matching (AHM) and uncertainty analysis of reservoir production forecasts. Although conceptually straightforward, DoE is often misused by practitioners because many of its statistical and modeling principles are not carefully followed. Our earlier paper (Li et al. 2019) detailed the best practices in DoE-based AHM for brownfields. However, to the best of our knowledge, there is a lack of studies that summarize the common caveats and pitfalls in DoE-based production forecast uncertainty analysis for greenfields and history-matched brownfields. Our objective here is to summarize these caveats and pitfalls to help practitioners apply the correct principles for DoE-based production forecast uncertainty analysis. Over 60 common pitfalls in all stages of a DoE workflow are summarized. Special attention is paid to the following critical project transitions: (1) the transition from static earth modeling to dynamic reservoir simulation; (2) from AHM to production forecast; and (3) from analyzing subsurface uncertainties to analyzing field-development alternatives. Most pitfalls can be avoided by consistently following the statistical and modeling principles. Some pitfalls, however, can trap experienced engineers. For example, mistakes made in handling the three abovementioned transitions can yield highly unreliable proxies and sensitivity analyses. For the representative examples we study, these mistakes can lead to a proxy R2 of less than 0.2, versus greater than 0.9 when the transitions are handled correctly. Two improved experimental designs are created to resolve this challenge. Besides the technical pitfalls that are avoidable via robust statistical workflows, we also highlight the often more severe non-technical pitfalls that cannot be evaluated by measures like R2. Thoughts are shared on how they can be avoided, especially during project framing and the three critical transition scenarios.
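
One basic mechanic involved, building a space-filling design and checking a proxy's hold-out R2, can be sketched as follows; the design size, toy response, and train/test split are illustrative assumptions, not the paper's cases (qmc.LatinHypercube requires scipy >= 1.7).

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(4)

# 60-run Latin hypercube over 3 uncertain parameters in [0, 1).
design = qmc.LatinHypercube(d=3, seed=4).random(n=60)
response = (design ** 2).sum(axis=1) + rng.normal(0.0, 0.01, 60)  # toy simulator

# Linear proxy fit on 45 runs, R2 checked on 15 held-out runs: a low
# hold-out R2 (like the <0.2 cases above) flags a proxy that is unfit
# for forecast uncertainty analysis.
train, test = slice(0, 45), slice(45, 60)
A = np.hstack([np.ones((60, 1)), design])
coef, *_ = np.linalg.lstsq(A[train], response[train], rcond=None)
pred = A[test] @ coef
ss_res = ((response[test] - pred) ** 2).sum()
ss_tot = ((response[test] - response[test].mean()) ** 2).sum()
print(f"hold-out R2 = {1 - ss_res / ss_tot:.2f}")
```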

