Compressed Dimension of Reservoir Models Uncertainty Parameters for Optimized Model Calibration and History Matching Process

2021 ◽  
Author(s):  
Ali Al-Turki ◽  
Obai Alnajjar ◽  
Majdi Baddourah ◽  
Babatunde Moriwawon

Abstract Algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, coupled with an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization, low-rank subsets of the most influential reservoir parameters are identified and propagated to MOGA to perform the field-level history match. Data misfits between the field historical data and simulation data are calculated with multiple realizations of reservoir models that quantify and capture reservoir uncertainty. Each generation of the optimization algorithms reduces the data misfit relative to the previous iteration. This iterative process continues until a satisfactory field-level history match is reached or there are no further improvements. The fine-tuning process of well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model and reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells. It is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility, etc.) are parameterized to conduct the global match of pressure and production rates. The Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of MOGA were assessed during the field (global) history matching. 
Also, the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history matching quality per well. The developed parameterization and optimization algorithms and workflows revealed the unique features of each algorithm for model calibration and history matching. This integrated workflow has successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to historical production and injection data. In general, the workflow results in enhanced history match quality in a shorter turnaround time. The geological realism of the model is retained for robust prediction and development planning.
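The GCT parameterization described above can be sketched in a few lines: a property field is expressed in the leading eigenvectors of the grid's connectivity Laplacian, so the optimizer works on a handful of smooth, low-rank coefficients instead of millions of cell values. This is a minimal illustration on a small 2D grid, not the authors' implementation; the grid size, basis size, and random field are all assumptions.

```python
import numpy as np

def grid_laplacian(nx, ny):
    """Graph Laplacian of a 2D grid's cell-connectivity network."""
    n = nx * ny
    L = np.zeros((n, n))
    for j in range(ny):
        for i in range(nx):
            c = j * nx + i
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    L[c, c] += 1.0
                    L[c, jj * nx + ii] -= 1.0
    return L

def gct_basis(nx, ny, k):
    """Leading k smooth modes: eigenvectors of L with smallest eigenvalues."""
    w, v = np.linalg.eigh(grid_laplacian(nx, ny))
    return v[:, :k]                    # n x k orthonormal basis

# Compress a permeability-multiplier field to k coefficients and back.
nx, ny, k = 10, 10, 8
basis = gct_basis(nx, ny, k)
rng = np.random.default_rng(0)
field = rng.normal(size=nx * ny)
coeffs = basis.T @ field               # low-rank parameters handed to MOGA
approx = basis @ coeffs                # reconstructed smooth field
```

Because the basis is orthonormal and ordered by smoothness, truncating to k modes acts as both a compression and a regularization of the updated property field.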

2021 ◽  
Author(s):  
Usman Aslam ◽  
Jorge Burgos ◽  
Craig Williams ◽  
Shawn McCloskey ◽  
James Cooper ◽  
...  

Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve the predictability of the model. Traditionally, a ‘best estimate’ for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; it is therefore of significant importance for understanding reservoir behavior, for model calibration, and hence for reliable production forecasts. Performing sensitivities around the ‘best estimate’ relative permeability case will cover only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasts. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is propagated through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called ‘scoping runs’. These relative permeability curves are then passed to the reservoir simulator. 
The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs against historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution, as uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process, but also led to the identification of multiple history matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows previously ignored uncertainty in the relative permeability data to be incorporated in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and the weights in the mismatch function allow both the measurement uncertainty and the effect of simulation-model inaccuracies to be accounted for. The Bayesian framework considers the whole uncertainty space and not just the history match region, leading to the identification of multiple history matched models.
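The ‘scoping runs’ step can be illustrated with a minimal sketch: sample Corey endpoints and exponents from assumed uniform priors and generate the corresponding relative permeability curves to pass to the simulator. The prior ranges and function names here are illustrative, not the field study's actual values.

```python
import numpy as np

def corey_relperm(sw, swc, sorw, krw_max, kro_max, nw, no):
    """Generalized Corey water/oil relative permeability curves."""
    swn = np.clip((sw - swc) / (1.0 - swc - sorw), 0.0, 1.0)
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return krw, kro

def sample_scoping_curves(n, rng):
    """Draw n curve sets from uniform priors on endpoints and exponents
    (ranges are illustrative placeholders)."""
    sw = np.linspace(0.0, 1.0, 50)
    curves = []
    for _ in range(n):
        swc = rng.uniform(0.10, 0.25)       # connate water saturation
        sorw = rng.uniform(0.15, 0.35)      # residual oil saturation
        nw = rng.uniform(1.5, 4.0)          # water Corey exponent
        no = rng.uniform(1.5, 4.0)          # oil Corey exponent
        krw_max = rng.uniform(0.2, 0.6)     # water endpoint
        curves.append(corey_relperm(sw, swc, sorw, krw_max, 1.0, nw, no))
    return curves

rng = np.random.default_rng(0)
curves = sample_scoping_curves(5, rng)      # one set per scoping run
```

Each sampled curve pair would then be written to a simulator input deck; comparing the resulting spread of responses against history quickly validates (or invalidates) the assumed priors.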


2021 ◽  
Author(s):  
M. A. Borregales Reverón ◽  
H. H. Holm ◽  
O. Møyner ◽  
S. Krogstad ◽  
K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility to history-match models using gradient-based optimization. Here, we discuss, study, and compare ES-MDA and gradient-based optimization for history-matching waterflooding models. We apply these two methods to history match reduced GPSNet-type models. To study the methods, we use an implementation of ES-MDA and a gradient-based optimization in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show complementary advantages of both ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. On the other hand, gradient-based optimization is efficient if the exact gradient is available, as it then requires a low number of model evaluations. If the exact gradient is not available, an approximate gradient and ES-MDA are good alternatives that give equivalent results in terms of computational cost and prediction quality.
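The ES-MDA update compared above has a compact form: a Kalman-like correction applied N_a times with inflated observation noise, where the inflation coefficients α_i satisfy Σ 1/α_i = 1. A minimal numpy sketch on a toy linear forward model follows; the model G, ensemble sizes, and noise levels are assumptions, and MRST's actual implementation differs.

```python
import numpy as np

def esmda_update(M, D, d_obs, Cd, alpha, rng):
    """One ES-MDA assimilation. M: parameters (Nm x Ne); D: simulated
    responses (Nd x Ne); Cd: observation-error covariance; alpha: inflation."""
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)              # parameter-data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)              # data covariance
    noise = rng.multivariate_normal(np.zeros(len(d_obs)),
                                    alpha * Cd, size=Ne).T
    K = Cmd @ np.linalg.inv(Cdd + alpha * Cd)
    return M + K @ (d_obs[:, None] + noise - D)

rng = np.random.default_rng(1)
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy linear "simulator"
m_true = np.array([2.0, -1.0])
Cd = 0.01 * np.eye(3)
d_obs = G @ m_true
M = rng.normal(size=(2, 200))                         # prior ensemble
for alpha in (4.0, 4.0, 4.0, 4.0):                    # sum of 1/alpha = 1
    M = esmda_update(M, G @ M, d_obs, Cd, alpha, rng)
```

With an exact gradient, the same misfit could instead be driven down by a deterministic optimizer with far fewer forward runs, which is the trade-off the abstract describes.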


2009 ◽  
Vol 12 (03) ◽  
pp. 446-454 ◽  
Author(s):  
Frode Georgsen ◽  
Anne R. Syversveen ◽  
Ragnar Hauge ◽  
Jan I. Tollefsrud ◽  
Morten Fismen

Summary The possibility of updating reservoir models with new well information is important for good reservoir management. The process from drilling a new well through updating the static model and history matching the new model is often time-consuming. This paper presents new algorithms that allow rapid updating of object-based facies models by further development of already existing models. An existing facies realization is adjusted to match new well observations by changing objects locally or adding/removing objects if required. Parts of the realization that are not influenced by the new wells are not changed. A local update of a specified region of the reservoir can be performed, leaving the rest of the reservoir unchanged or with minimal change resulting from the new wells. In this method, the main focus is the algorithm implemented to fulfill well conditioning. The effect of this algorithm on different object models is presented through several case studies. These studies show how the local update consistently includes new information while leaving the rest of the realization unperturbed, thereby preserving the good history match. Introduction Rapid updating of static and dynamic reservoir models is important for reservoir management. Continual maintenance of history-matched models allows for right-time decisions to optimize reservoir performance. The process from drilling a new well through updating the static model and history matching the new model is often time-consuming. Static reservoir models and history matches are updated only intermittently, and there is typically a 1- to 2-year delay between the drilling of a new well and the generation of a reliable history-matched model that incorporates the new information. This paper presents new algorithms that allow rapid updating of static reservoir models when new wells are drilled. 
The static-model update is designed to keep as much of the existing history match as possible by locally adjusting the existing static model to the new well data. As the name implies, object models use a set of facies objects to generate a facies realization. Stochastic object-modeling algorithms have been developed to improve the representation of facies architectures in complex heterogeneous reservoirs and, thereby, to obtain more-realistic dynamic behavior of the reservoir models. We consider the main advantages of object models to be the ability to create geologically realistic facies elements (objects) and control the interaction between them, to correlate observations between wells (connectivity) explicitly, and to apply intraobject petrophysical trends.


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
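The core PPM idea, perturbing a probability rather than the petrophysical property itself, can be sketched as a simple blend between the current realization's facies indicator and the prior probability model; the perturbation parameter r is then tuned so the flow response better matches production data. This toy binary-facies version deliberately ignores the sequential-simulation machinery of the actual method.

```python
import numpy as np

def perturb_probability(i_current, p_prior, r):
    """Probability-perturbation blend for a binary facies indicator:
    r = 0 reproduces the current realization, r = 1 redraws from the
    prior probability model, so the geologic concept is never violated."""
    return (1.0 - r) * i_current + r * p_prior

rng = np.random.default_rng(0)
p_prior = np.full(50, 0.3)                             # prior channel probability
i_current = (rng.random(50) < p_prior).astype(float)   # current realization
p_new = perturb_probability(i_current, p_prior, 0.2)   # small perturbation
i_new = (rng.random(50) < p_new).astype(float)         # perturbed realization
```

Because cells where i_current = 1 keep a high probability under small r, the perturbed realization stays close to the current channel geometry, which is what preserves geological realism during the match.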


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of the study was to create an ensemble of equiprobable models that could be used for improving the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze and history match an ensemble of reservoir simulation models to production and 4D seismic data. The goal of developing the workflows is to increase the utilization of data from 4D seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, describing their benefits and challenges. The data conditioning produced a set of history matched reservoir models which could be used in the field development decision making process. The proposed workflows allowed for identification of outlying prior and posterior models based on key features where observed data was not covered by the synthetic 4D seismic realizations. As a result, suggestions for a more robust parameterization of the ensemble were made to improve data coverage. The existing history matching workflow efficiently integrated with the quantitative 4D seismic history matching workflow allowing for the conditioning of the reservoir models to production and 4D data. Thus, the predictability of the models was improved. This paper proposes a systematic and efficient workflow using ensemble-based methods to simultaneously screen, analyze and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization, and in turn, for the reservoir management and the decision-making processes.


2021 ◽  
Author(s):  
Giorgio Fighera ◽  
Ernesto Della Rossa ◽  
Patrizia Anastasi ◽  
Mohammed Amr Aly ◽  
Tiziano Diamanti

Abstract Improvements in reservoir simulation computational time, thanks to GPU-based simulators and the increasing computational power of modern HPC systems, are paving the way for massive employment of Ensemble History Matching (EHM) techniques, which are intrinsically parallel. Here we present the results of a comparative study between a newly developed EHM tool that aims at leveraging GPU parallelism and a commercial third-party EHM software used as a benchmark. Both are tested on a real case. The reservoir chosen for the comparison has a production history of 3 years with 15 wells comprising oil producers and water and gas injectors. The EHM algorithm used is the Ensemble Smoother with Multiple Data Assimilation (ESMDA), and both tools have access to the same computational resources. The EHM problem was stated in the same way for both tools. The objective function considers well oil productions, water cuts, bottom-hole pressures, and gas-oil ratios. Porosity and horizontal permeability are used as 3D grid parameters in the update algorithm, along with nine scalar parameters for anisotropy ratios, Corey exponents, and fault transmissibility multipliers. Both the presented tool and the benchmark obtained a satisfactory history match quality. The benchmark tool took around 11.2 hours to complete, while the proposed tool took only 1.5 hours. The two tools performed similar updates on the scalar parameters, with only minor discrepancies. Updates on the 3D grid properties, however, show significant local differences. The updated ensemble for the benchmark reached extreme values for porosity and permeability, distributed in a heterogeneous way. These distributions are quite unlikely in some model regions given the initial geological characterization of the reservoir. The updated ensemble for the presented tool did not reach extreme values in either porosity or permeability. 
The resulting property distributions are not far from those of the initial ensemble, so we can conclude that we were able to successfully update the ensemble while preserving the geological characterization of the reservoir. Analysis suggests that this discrepancy is due to the different way in which our EHM code considers inactive cells in the grid-update calculations compared to the benchmark, highlighting that statistics including inactive cells should be carefully managed to correctly preserve the geological distribution represented in the initial ensemble. The presented EHM tool was developed from scratch to be fully parallel and to leverage the abundant available computational resources. Moreover, the ESMDA implementation was tweaked to improve the reservoir update by carefully managing inactive cells. A comparison against a benchmark showed that the proposed EHM tool achieved similar history match quality while improving the computation time and the geological realism of the updated ensemble.
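The inactive-cell issue discussed above comes down to restricting both the update and any ensemble statistics to active cells. A minimal sketch (array sizes and property values are illustrative):

```python
import numpy as np

def masked_update(prop, delta, active):
    """Apply an ensemble-smoother grid update only to active cells;
    inactive cells keep their original values."""
    out = prop.copy()
    out[active] += delta[active]
    return out

def masked_mean_std(prop, active):
    """Grid statistics over active cells only, so placeholder values in
    inactive cells cannot distort the geological distribution."""
    vals = prop[active]
    return vals.mean(), vals.std()

active = np.array([True, True, False, True, False])
prop = np.array([0.20, 0.25, 0.0, 0.30, 0.0])    # porosity; zeros are inactive
delta = np.array([0.01, -0.02, 0.50, 0.03, 0.50])
updated = masked_update(prop, delta, active)
```

If the zeros in inactive cells were included, the grid mean would be dragged toward zero and the update would compensate with the extreme values the benchmark exhibited.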


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 431-442 ◽  
Author(s):  
Xian-Huan Wen ◽  
Wen H. Chen

Summary The ensemble Kalman filter (EnKF) technique has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. Using EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available. Thus, the estimates of reservoir model parameters and their associated uncertainty, as well as the forecasts, are always up-to-date. In this paper, we apply the EnKF for continuously updating an ensemble of permeability models to match real-time multiphase production data. We improve the previous EnKF by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilation step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, it also creates some inconsistency between the static and dynamic parameters at the previous step, where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter shows better performance for the particular example investigated. We also investigate the sensitivity of using different numbers of realizations in the EnKF. Our results show that a relatively large number of realizations is needed to obtain stable results, particularly for the reliable assessment of uncertainty. The sensitivity of using different covariance functions is also investigated. The efficiency and robustness of the EnKF are demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production. Introduction The reliability of reservoir models increases as more data are included in their construction. 
Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data, are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-) automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated as a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems, which require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006). Over the past decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications. 
Under the framework of traditional history matching, the assessment of uncertainty is usually through a repeated history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, the traditional history-matching methods are not designed in such a fashion that allows for continuous model updating. When new production data are available and are required to be incorporated, the history-matching process has to be repeated using all measured data. These limit the efficiency and applicability of the traditional automatic history-matching techniques.
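A bare-bones EnKF analysis step, with the confirming option indicated as a comment, can be sketched as follows; the augmented state, observation operator, and noise levels are toy assumptions, not the paper's setup.

```python
import numpy as np

def enkf_analysis(E, H, d_obs, Cd, rng):
    """EnKF analysis on an augmented-state ensemble E (Nstate x Ne).
    H maps the state to the observed data (here a linear observation)."""
    Ne = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)
    D = H @ E
    dD = D - D.mean(axis=1, keepdims=True)
    Cdd = dD @ dD.T / (Ne - 1)
    Csd = A @ dD.T / (Ne - 1)              # state-data cross-covariance
    pert = rng.multivariate_normal(np.zeros(len(d_obs)), Cd, size=Ne).T
    return E + Csd @ np.linalg.solve(Cdd + Cd, d_obs[:, None] + pert - D)

rng = np.random.default_rng(2)
E = rng.normal(size=(3, 100))              # [permeability, pressure, rate]
H = np.array([[0.0, 0.0, 1.0]])            # observe the rate only
d_obs = np.array([1.5])
Cd = np.array([[0.04]])
E_a = enkf_analysis(E, H, d_obs, Cd, rng)
# Confirming option (sketch): instead of carrying the updated dynamic state
# forward, re-solve the flow equations from the previous assimilation step
# with the updated static parameters, so static and dynamic variables stay
# consistent with the flow equations at the current step.
```

The analysis both pulls the observed component toward the measurement and shrinks its ensemble spread, which is the uncertainty reduction the summary describes.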


2021 ◽  
Author(s):  
Manish Kumar Choudhary ◽  
Gaurav Mahanti ◽  
Yogesh Rana ◽  
Sai Venkata Garimella ◽  
Arfan Ali ◽  
...  

Abstract Field X is one of the largest oil fields in Brunei, producing since the 1970s. The field consists of a large faulted anticlinal structure of shallow marine Miocene sediments. The field has over 500 compartments and has been produced under waterflood since the 1980s through 400+ conduits across 50 platforms. A comprehensive review of water injection performance was undertaken in 2019 to assess remaining oil and identify infill opportunities. Large uncertainties in reservoir properties, connectivity, and fluid contacts required that data across multiple disciplines be integrated to identify new opportunities. It was recognized early on that integrated analysis of surveillance data and 40 years of production history would be critical for understanding field performance. Hence, reviews were first initiated using sand maps and analytical techniques. Tracer surveys, reservoir pressures, salinity measurements, and Production Logging Tool (PLT) data were all analyzed to understand waterflood progression and to define connectivity scenarios. A complete review of well logs, core data from over 30 wells, and outcrop studies was carried out as part of the modelling workflow. This understanding was used to construct a new facies-based static model. In parallel, key dynamic inputs such as PVT analysis reports and special core analysis studies were analyzed to update dynamic modelling components. Prior to initiating the full field model history matching, a comprehensive impact analysis of the key dynamic uncertainties, i.e., production allocation, connectivity, and varying aquifer strength, was conducted. An Assisted History Matching (AHM) workflow was attempted, which helped in identifying high-impact inputs that could be varied for history matching. Adjoint techniques were also used to identify other plausible geological scenarios. The integrated review helped in identifying over 50 new opportunities which can potentially increase recovery by over 10%. 
The new static model identified upsides in Stock Tank Oil Initially in Place (STOIIP) which, if realized, could further increase the ultimate recoverable volume. The use of AHM assisted in reducing iterations and in achieving multiple history matched models, which can be used to quantify forecast uncertainty. The new opportunities have helped to revitalize the mature field and have the potential to increase production by over 50%. A dedicated team is now maturing these opportunities. The robust methodology of integrating surveillance data with simulation modelling as described in this paper is generic and could be useful in current-day brownfield development practices as an effective and economic means of sustaining oil production and maximizing ultimate recovery. It is essential that all surveillance and production history data are analyzed together prior to attempting any detailed modelling exercise. New models should then be constructed which conform to the surveillance information and capture reservoir uncertainties. In large oil fields with long production histories and allocation uncertainties, quantitative assessment of history match quality and infill well Ultimate Recovery (UR) estimation is always a challenge. Hence a composite History Match Quality Indicator (HMQI) was designed with appropriate weightings of rate, cumulative, and reservoir pressure mismatch, and water breakthrough timing delays. HMQI spatial variation maps were then made for different zones over the entire field to understand and appropriately discount each infill well's oil recovery. Also, it is critical that facies variation is properly captured in the models to better understand waterfront movements and locate remaining oil. Dynamic modelling of a mature field with a long production history can be quite challenging on its own, and it is imperative that new numerical techniques are used to increase efficiency.
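A composite HMQI of the kind described can be sketched as a weighted sum of normalized mismatch terms; the weights and mismatch definitions below are placeholders, since the paper's actual weightings are not given.

```python
import numpy as np

def hmqi(rate_mis, cum_mis, pressure_mis, breakthrough_mis,
         weights=(0.3, 0.3, 0.3, 0.1)):
    """Composite history-match quality indicator for one well: a weighted
    sum of normalized mismatch terms (0 = perfect match). Weights are
    illustrative, not the field study's values."""
    terms = np.array([rate_mis, cum_mis, pressure_mis, breakthrough_mis])
    return float(np.dot(weights, terms))

# Per-well HMQI values can then be mapped per zone across the field to
# discount infill-well recovery estimates where the local match is poor.
wells = {"W1": hmqi(0.1, 0.2, 0.1, 0.0),    # good local match
         "W2": hmqi(0.8, 0.9, 0.7, 1.0)}    # poor local match
```

Normalizing each term (e.g., by observed magnitude or an acceptable tolerance) keeps the indicator comparable across wells with very different rates and pressures.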


2021 ◽  
Author(s):  
Salahaldeen Alqallabi ◽  
Abdul Saboor Khan ◽  
Anish Phade ◽  
Mohamed Tarik Gacem ◽  
Mustapha Adli ◽  
...  

Abstract The aim of this study is to demonstrate the value of a fully integrated ensemble-based modeling approach for an onshore field in Abu Dhabi. Model uncertainties are included in both the static and dynamic domains, and valuable insights are achieved in a record time of nine weeks with very promising results. Workflows are established to honor the recommended static and dynamic modeling processes suited to the complexity of the field. Realistic sedimentological, structural, and dynamic reservoir parameter uncertainties are identified and propagated to obtain realistic variability in the reservoir simulator response. These integrated workflows are used to generate an ensemble of equi-probable reservoir models. All realizations in the ensemble are then history-matched simultaneously before carrying out production predictions using the entire ensemble. Analysis of the updates made during the history-matching process provides valuable insights into the reservoir, such as the presence of enhanced permeability streaks. These represent a challenge for the explicit modeling process due to their complex responses on the well log profiles. However, analysis of the history matched ensemble shows that the location of high permeability updates generated by the history matching process is consistent with geological observations of enhanced permeability streaks in cores and with the sequence stratigraphic framework. Additionally, post-processing of available PLT data as a blind test shows that trends of fluid flow along horizontal wells are well captured, increasing confidence in the geologic consistency of the ensemble of models. This modeling approach provides an ensemble of history-matched reservoir models having an excellent match for both field-level and individual-well observed production data. Furthermore, with the recommended modeling workflows, the generated models are geologically consistent and honor inherent correlations in the input data. 
Forecasting with this ensemble of models enables realistic uncertainties in dynamic responses to be quantified, providing insights for informed reservoir management decisions and risk mitigation. Analysis of forecasted ensemble dynamic responses helps evaluate the performance of existing infill targets and delineate new infill targets while understanding the associated risks under both static and dynamic uncertainty. Repeatable workflows allow incorporation of new data in a robust manner and accelerate the time from model building to decision making.
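Extracting probabilistic forecasts from such an ensemble is straightforward: percentile bands across members at each timestep. A minimal sketch with synthetic forecasts (the distribution and sizes are assumptions):

```python
import numpy as np

def forecast_percentiles(ensemble, levels=(10, 50, 90)):
    """P10/P50/P90 summaries of an ensemble of production forecasts,
    shaped (members x timesteps)."""
    return {p: np.percentile(ensemble, p, axis=0) for p in levels}

rng = np.random.default_rng(3)
forecasts = rng.lognormal(mean=0.0, sigma=0.3, size=(200, 12))  # synthetic
bands = forecast_percentiles(forecasts)
```

Infill targets can then be ranked not only by their P50 response but by the P10-P90 spread, which is the risk dimension the abstract emphasizes.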


SPE Journal ◽  
2017 ◽  
Vol 22 (05) ◽  
pp. 1506-1518 ◽  
Author(s):  
Pedram Mahzari ◽  
Mehran Sohrabi

Summary Three-phase flow in porous media during water-alternating-gas (WAG) injection and the associated cycle-dependent hysteresis have been the subject of experimental and theoretical studies. In spite of attempts to develop models and simulation methods for WAG injection and three-phase flow, the current lack of a solid approach to handle hysteresis effects in simulating WAG-injection scenarios has resulted in misinterpretations of simulation outcomes at laboratory and field scales. In this work, using our improved methodology, the first cycle of the WAG experiments (the first waterflood and the subsequent gasflood) was history matched to estimate the two-phase krs (oil/water and gas/oil). For subsequent cycles, pertinent parameters of the WAG hysteresis model are included in the automatic-history-matching process to reproduce all WAG cycles together. The results indicate that history matching the whole WAG experiment leads to a significantly improved simulation outcome, which highlights the importance of two elements in evaluating WAG experiments: inclusion of the full WAG experiment in history matching and use of a more representative set of two-phase krs, which originated from our new methodology to estimate two-phase krs from the first cycle of a WAG experiment. Because WAG-related parameters should be able to model any three-phase flow irrespective of the WAG scenario, in another exercise, the tuned parameters obtained from a WAG experiment starting with water were used in a similar coreflood test (WAG starting with gas) to assess the predictive capability for simulating three-phase flow in porous media. After identifying shortcomings of existing models, an improved methodology was used to history match multiple coreflood experiments simultaneously to estimate parameters that can reasonably capture the processes taking place in WAG at different scenarios, that is, starting with water or gas. 
The comprehensive simulation study performed here would shed some light on a consolidated methodology to estimate saturation functions that can simulate WAG injections at different scenarios.
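Estimating the two-phase krs from the first cycle, before fitting hysteresis parameters, is in essence a curve-fitting problem. A toy sketch using a gas/oil Corey curve and a brute-force grid search follows; the endpoints, ranges, and synthetic data are assumptions, and the actual study matches full coreflood dynamics rather than kr points directly.

```python
import numpy as np

def corey_kro(sg, sgc, sorg, n):
    """Gas/oil Corey curve: oil relative permeability vs gas saturation."""
    sgn = np.clip((sg - sgc) / (1.0 - sgc - sorg), 0.0, 1.0)
    return (1.0 - sgn) ** n

def fit_corey_exponent(sg, kro_obs, sgc, sorg):
    """Grid-search the Corey exponent minimizing the squared mismatch to
    observed points -- a stand-in for history matching the two-phase krs
    on the first WAG cycle before the hysteresis parameters are fitted."""
    n_grid = np.linspace(1.0, 6.0, 501)          # 0.01 resolution
    errs = [np.sum((corey_kro(sg, sgc, sorg, n) - kro_obs) ** 2)
            for n in n_grid]
    return float(n_grid[int(np.argmin(errs))])

sg = np.linspace(0.05, 0.6, 20)
kro_true = corey_kro(sg, 0.05, 0.25, 3.0)        # synthetic first-cycle data
n_fit = fit_corey_exponent(sg, kro_true, 0.05, 0.25)
```

In the real workflow the mismatch would be computed on simulated pressure drops and production volumes, and the subsequent cycles would add the WAG hysteresis parameters to the same optimization.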

