Fast History Matching of Finite-Difference Models Using Streamline-Based Sensitivities

2005 ◽  
Vol 8 (05) ◽  
pp. 426-436 ◽  
Author(s):  
Hao Cheng ◽  
Arun Kharghoria ◽  
Zhong He ◽  
Akhil Datta-Gupta

Summary We propose a novel approach to history matching finite-difference models that combines the advantages of streamline models with the versatility of finite-difference simulation. Current streamline models are limited in their ability to incorporate complex physical processes and cross-streamline mechanisms in a computationally efficient manner. A unique feature of streamline models is their ability to analytically compute the sensitivity of the production data with respect to reservoir parameters using a single flow simulation. These sensitivities relate changes in the production response to small changes in reservoir parameters and thus form the basis for many history-matching algorithms. In our approach, we use the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. First, the velocity field from the finite-difference model is used to compute streamline trajectories, time of flight, and parameter sensitivities. The sensitivities are then used in an inversion algorithm to update the reservoir model during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. Although the streamline-derived sensitivities are only approximate, the approximation does not seem to noticeably impact the quality of the match or the efficiency of the approach. For history matching, we use a generalized travel-time inversion (GTTI) that is shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids many of the subjective judgments and time-consuming trial-and-error steps associated with manual history matching. We demonstrate the power and utility of our approach with a synthetic example and two field examples. 
The first one is from a CO2 pilot area in the Goldsmith San Andres Unit (GSAU), a dolomite formation in West Texas with more than 20 years of waterflood production history. The second example is from a Middle Eastern reservoir and involves history matching a multimillion-cell geologic model with 16 injectors and 70 producers. The final model preserved all of the prior geologic constraints while matching 30 years of production history. Introduction Geological models derived from static data alone often fail to reproduce the field production history. Reconciling geologic models to the dynamic response of the reservoir is critical to building reliable reservoir models. Classical history-matching procedures whereby reservoir parameters are adjusted manually by trial and error can be tedious and often yield a reservoir description that may not be realistic or consistent with the geologic interpretation. In recent years, several techniques have been developed for integrating production data into reservoir models. Integration of dynamic data typically requires a least-squares-based minimization to match the observed and calculated production response. There are several approaches to such minimization, and these can be classified broadly into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods. The derivative-free approaches, such as simulated annealing or genetic algorithms, require numerous flow simulations and can be computationally prohibitive for field-scale applications. Gradient-based methods have been used widely for automatic history matching, although the convergence rates of these methods are typically slower than those of the sensitivity-based methods such as the Gauss-Newton or the LSQR method. An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. 
These sensitivities are simply partial derivatives that define the change in production response caused by small changes in reservoir parameters. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: perturbation method, direct method, and adjoint-state methods. Conceptually, the perturbation approach is the simplest and requires the fewest changes in an existing code. Sensitivities are estimated simply by perturbing the model parameters one at a time by a small amount and then computing the corresponding production response. This approach requires (N+1) forward simulations, where N is the number of parameters. Obviously, it can be computationally prohibitive for reservoir models with many parameters. In the direct or sensitivity equation method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients. Because there is one equation for each parameter, this approach requires roughly the same amount of work as the perturbation method. A variation of this method, called the gradient simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all the parameters and needs to be decomposed only once. Thus, sensitivity computation for each parameter now requires only a matrix/vector multiplication. This method can also be computationally expensive for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be quite cumbersome for multiphase-flow applications. Furthermore, the number of adjoint solutions will generally depend on the amount of production data and, thus, the length of the production history.
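The perturbation method described above is simple enough to sketch in a few lines. The `simulate` function below is a toy stand-in for a forward reservoir simulation (names and test values are ours, purely for illustration); the point is the N+1 forward runs for N parameters.

```python
import numpy as np

def perturbation_sensitivities(simulate, params, eps=1e-4):
    """One-sided finite-difference sensitivities dD/dm.

    Runs the forward model once for the base case and once per perturbed
    parameter: N+1 simulations for N parameters, which is exactly what
    makes the method prohibitive for models with many parameters.
    """
    base = simulate(params)
    S = np.empty((base.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps
        S[:, j] = (simulate(p) - base) / eps
    return S

# Toy stand-in for a simulator: two "responses" of two parameters.
simulate = lambda m: np.array([m[0] ** 2 + m[1], 3.0 * m[1]])
S = perturbation_sensitivities(simulate, np.array([1.0, 2.0]))
# The analytic Jacobian at (1, 2) is [[2, 1], [0, 3]].
```

In a real application each call to `simulate` is a full flow simulation, so the loop over parameters dominates the cost.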

SPE Journal ◽  
2007 ◽  
Vol 12 (04) ◽  
pp. 475-485 ◽  
Author(s):  
Hao Cheng ◽  
Adedayo Stephen Oyerinde ◽  
Akhil Datta-Gupta ◽  
William J. Milliken

Summary Reconciling high-resolution geologic models to field production history is still by far the most time-consuming aspect of the workflow for both geoscientists and engineers. Recently, streamline-based assisted and automatic history-matching techniques have shown great potential in this regard, and several field applications have demonstrated the feasibility of the approach. However, most of these applications have been limited to two-phase water/oil flow under incompressible or slightly compressible conditions. We propose an approach to history matching three-phase flow using a novel compressible streamline formulation and streamline-derived analytic sensitivities. First, we use a generalized streamline model to account for compressible flow by introducing an "effective density" of total fluids along streamlines. This density term rigorously captures changes in fluid volumes with pressure and is easily traced along streamlines. A density-dependent source term in the saturation equation further accounts for the pressure effects during saturation calculations along streamlines. Our approach preserves the 1D nature of the saturation equation and all the associated advantages of the streamline approach with only minor modifications to existing streamline models. Second, we analytically compute parameter sensitivities that define the relationship between the reservoir properties and the production response, viz. water-cut and gas/oil ratio (GOR). These sensitivities are an integral part of history matching, and streamline models permit efficient computation of these sensitivities through a single flow simulation. Finally, for history matching, we use a generalized travel-time inversion that has been shown to be robust because of its quasilinear properties and to converge in only a few iterations. The approach is very fast and avoids much of the subjective judgment and time-consuming trial-and-error inherent in manual history matching. 
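The core of the generalized travel-time idea can be made concrete: collapse the misfit between observed and computed response at a well into a single time shift. A brute-force scan over candidate shifts (our own discretization, not the papers' exact optimization) might look like:

```python
import numpy as np

def best_travel_time_shift(t, observed, computed, shifts):
    """Pick the shift dt that best aligns the computed curve with the
    observed one; positive dt means the computed response lags.

    GTTI optimizes a single travel-time shift per well instead of
    amplitude misfits at every sample, which is what gives it its
    quasilinear behavior. This exhaustive scan is illustrative only.
    """
    best_dt, best_misfit = 0.0, np.inf
    for dt in shifts:
        shifted = np.interp(t, t - dt, computed)  # computed(t + dt)
        misfit = np.sum((observed - shifted) ** 2)
        if misfit < best_misfit:
            best_dt, best_misfit = dt, misfit
    return best_dt

t = np.linspace(0.0, 10.0, 201)
observed = 1.0 / (1.0 + np.exp(-(t - 5.0)))   # water-cut-like curve
computed = 1.0 / (1.0 + np.exp(-(t - 6.0)))   # breakthrough one unit late
dt = best_travel_time_shift(t, observed, computed, np.linspace(-3, 3, 121))
```

The recovered shift is close to the one-unit breakthrough delay built into the toy curves.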
We demonstrate the power and utility of our approach using both synthetic and field-scale examples. The synthetic case is used to validate our method. It entails the joint integration of water-cut and gas/oil ratio (GOR) data from a nine-spot pattern in reconstructing a reference permeability field. The field-scale example is a modified version of the ninth SPE comparative study and consists of 25 producers, 1 injector, and aquifer influx. Starting with a prior geologic model, we integrate water-cut and GOR history by the generalized travel-time inversion. Our approach is very fast and preserves the geologic continuity. Introduction Integration of production data typically requires the minimization of a predefined data misfit and penalty terms to match the observed and calculated production response (Oliver 1994; Vasco et al. 1999; Datta-Gupta et al. 2001; Reis et al. 2000; Landa et al. 1996; Anterion et al. 1989; Wu et al. 1999; Wang and Kovscek 2000; Sahni and Horne 2005). There are several approaches to such minimization, and these can be broadly classified into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods (Oliver 1994). The derivative-free approaches such as simulated annealing and genetic algorithms require numerous flow simulations and can be computationally prohibitive for field-scale applications with very large numbers of parameters. Gradient-based methods have been widely used for automatic history matching, although the rate of convergence of these methods is typically slower than that of the sensitivity-based methods, such as the Gauss-Newton or the LSQR method (Vega et al. 2004). An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: perturbation method, direct method, and adjoint-state methods. 
The perturbation approach is the simplest and requires the fewest changes to an existing code. This approach requires (N+1) forward simulations, where N is the number of parameters. Obviously, this can be computationally prohibitive for reservoir models with many parameters. In the direct, or sensitivity-equation, method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients (Vasco et al. 1999). Because there is one equation for each parameter, this approach can require the same amount of work as the perturbation method. A variation of this method, called the gradient simulator method, utilizes the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all parameters and needs to be decomposed only once (Anterion et al. 1989). Thus, sensitivity computation for each parameter now requires a matrix-vector multiplication. This method obviously represents a significant improvement, but it can still be computationally demanding for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be significantly smaller in number compared to the sensitivity equations. The adjoint equations are obtained by minimizing the production data misfit with the flow equations as constraints, and the implementation of the method can be quite complex and cumbersome for multiphase flow applications (Wu et al. 1999). Furthermore, the number of adjoint solutions will generally depend on the amount of production data and thus can be restrictive for field-scale applications.
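The gradient-simulator economy described above comes from the shared coefficient matrix: factorize once, then each parameter's sensitivity costs only a solve. A toy illustration (a hypothetical 1D pressure system, not any specific simulator's discretization):

```python
import numpy as np

# Differentiating the discretized flow equations A u = q with respect to
# each parameter m_j gives A (du/dm_j) = b_j: the SAME matrix A for every
# parameter. Factorizing A once, each sensitivity then costs only a
# back-substitution rather than a full simulation.
n = 5
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))   # toy 1D Laplacian-type system
B = np.eye(n)                          # one right-hand side b_j per parameter
X = np.linalg.solve(A, B)              # all N sensitivities, one factorization
```

`np.linalg.solve` with a matrix right-hand side factorizes `A` once internally and reuses the factorization for every column, which is precisely the saving the gradient simulator method exploits.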


2021 ◽  
Author(s):  
M. A. Borregales Reverón ◽  
H. H. Holm ◽  
O. Møyner ◽  
S. Krogstad ◽  
K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility to history-match models using gradient-based optimization. Here, we discuss, study, and compare ES-MDA and gradient-based optimization for history matching waterflooding models. We apply these two methods to history match reduced GPSNet-type models. To study the methods, we use an implementation of ES-MDA and a gradient-based optimization in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show complementary advantages of both ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. On the other hand, gradient-based optimization is efficient if the exact gradient is available, as it then requires a low number of model evaluations. If the exact gradient is not available, using an approximate gradient or ES-MDA is a good alternative, and the two give equivalent results in terms of computational cost and prediction quality.
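The ES-MDA update itself is compact enough to sketch. Below is the standard textbook form of one assimilation step (no localization), driven by a toy linear forward model in place of a reservoir simulator; all names and values are illustrative, not MRST's implementation.

```python
import numpy as np

def es_mda_update(M, D, d_obs, C_D, alpha, rng):
    """One ES-MDA step (ensemble members in columns).

    M: (n_m, n_e) parameter ensemble; D: (n_d, n_e) predicted data;
    C_D: measurement-error covariance; alpha: inflation factor for this
    step (the 1/alpha values must sum to 1 over all steps).
    """
    n_e = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_e - 1)
    C_dd = dD @ dD.T / (n_e - 1)
    # Perturb observations with inflated noise, then apply the gain.
    noise = np.sqrt(alpha) * rng.multivariate_normal(
        np.zeros(len(d_obs)), C_D, size=n_e).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_D)
    return M + K @ (d_obs[:, None] + noise - D)

rng = np.random.default_rng(0)
G = np.array([[1.0, 0.5], [0.2, 1.5], [1.0, 1.0]])   # toy linear "simulator"
m_true = np.array([1.0, 2.0])
d_obs = G @ m_true
C_D = 0.01 * np.eye(3)
M = rng.normal(0.0, 2.0, size=(2, 200)) + 1.0        # prior ensemble
for alpha in [4.0, 4.0, 4.0, 4.0]:                   # 4 steps, sum(1/alpha) = 1
    M = es_mda_update(M, G @ M, d_obs, C_D, alpha, rng)
```

After the four assimilation steps the ensemble mean sits close to the reference parameters, while the remaining spread reflects the posterior uncertainty.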


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are only conditioned to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
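The key step of the PPM is worth stating concretely: rather than perturbing petrophysical properties, one perturbs the conditioning probability, blending the current realization with the prior so the geostatistical model structure is preserved. A sketch of that single step (a simplified form of Caers' perturbation rule; variable names are ours):

```python
import numpy as np

def ppm_perturbed_probability(indicator, prior_prob, r):
    """Probability-perturbation step: P = (1 - r) * i(u) + r * P(A).

    indicator: the current realization's facies indicator (0/1) per cell;
    prior_prob: the prior marginal facies probability; r in [0, 1] is the
    perturbation parameter. r = 0 reproduces the current realization and
    r = 1 draws freshly from the prior, so history matching reduces to a
    1D search on r at each perturbation step.
    """
    return (1.0 - r) * indicator + r * prior_prob

i_current = np.array([1.0, 0.0, 1.0, 1.0])   # channel indicator per cell
p_prior = 0.3                                 # prior channel proportion
p_half = ppm_perturbed_probability(i_current, p_prior, 0.5)
```

A new realization drawn from the perturbed probabilities stays consistent with the multiple-point prior while moving toward a better match as r is tuned.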


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 418-430 ◽  
Author(s):  
Karl D. Stephen ◽  
Juan Soldo ◽  
Colin Macbeth ◽  
Mike A. Christie

Summary Time-lapse (or 4D) seismic is increasingly being used as a qualitative description of reservoir behavior for management and decision-making purposes. When combined quantitatively with geological and flow modeling as part of history matching, improved predictions of reservoir production can be obtained. Here, we apply a method of multiple-model history matching based on simultaneous comparison of spatial data offered by seismic as well as individual well-production data. Using a petroelastic transform and suitable rescaling, forward-modeled simulations are converted into predictions of seismic impedance attributes and compared to observed data by calculation of a misfit. A similar approach is applied to dynamic well data. This approach improves on gradient-based methods by avoiding entrapment in local minima. We demonstrate the method by applying it to the UKCS Schiehallion reservoir, updating the operator's model. We consider a number of parameters to be uncertain. The reservoir's net to gross is initially updated to better match the observed baseline acoustic impedance derived from the RMS amplitudes of the migrated stack. We then history match simultaneously for permeability, fault transmissibility multipliers, and the petroelastic transform parameters. Our results show a good match to the observed seismic and well data with significant improvement to the base case. Introduction Reservoir management requires tools such as simulation models to predict asset behavior. History matching is often employed to alter these models so that they compare favorably to observed well rates and pressures. This well information is obtained at discrete locations and thus lacks the areal coverage necessary to accurately constrain dynamic reservoir parameters such as permeability and the location and effect of faults. Time-lapse seismic captures the effect of pressure and saturation on seismic impedance attributes, giving 2D maps or 3D volumes of the missing information. 
The process of seismic history matching attempts to overlap the benefits of both types of information to improve estimates of the reservoir model parameters. We first present an automated multiple-model history-matching method that includes time-lapse seismic along with production data, based on an integrated workflow (Fig. 1). It improves on the classical approach, wherein the engineer manually adjusts parameters in the simulation model. Our method also improves on gradient-based methods, such as Steepest Descent, Gauss-Newton, and Levenberg-Marquardt algorithms (e.g., Lépine et al. 1999; Dong and Oliver 2003; Gosselin et al. 2003; Mezghani et al. 2004), which are good at finding local likelihood maxima but can fail to find the global maximum. Our method is also faster than stochastic methods such as genetic algorithms and simulated annealing, which often require more simulations and may have slower convergence rates. Finally, multiple models are generated, enabling posterior uncertainty analysis in a Bayesian framework (as in Stephen and MacBeth 2006a).
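The simultaneous comparison of seismic and well data comes down to a weighted, normalized misfit. A minimal version (weights and scalings are illustrative, not the Schiehallion study's exact choices):

```python
import numpy as np

def combined_misfit(seis_obs, seis_sim, prod_obs, prod_sim,
                    sigma_seis, sigma_prod, w_seis=0.5):
    """Least-squares misfit mixing seismic impedance maps and well rates.

    Each contribution is scaled by its measurement error and averaged over
    its own data count, so the spatially dense seismic maps do not swamp
    the sparse well data.
    """
    m_seis = np.mean(((seis_obs - seis_sim) / sigma_seis) ** 2)
    m_prod = np.mean(((prod_obs - prod_sim) / sigma_prod) ** 2)
    return w_seis * m_seis + (1.0 - w_seis) * m_prod

# Toy data: a 10x10 impedance map and 3 well rates.
rng = np.random.default_rng(0)
ai_obs = rng.normal(6.0, 0.5, size=(10, 10))
rates_obs = np.array([120.0, 80.0, 45.0])
perfect = combined_misfit(ai_obs, ai_obs, rates_obs, rates_obs, 0.2, 5.0)
```

A perfect forward model gives zero misfit; a uniform one-sigma impedance error raises the seismic term to one, contributing its weight to the total.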


2020 ◽  
Vol 496 (1) ◽  
pp. 199-207 ◽  
Author(s):  
Tor Anders Knai ◽  
Guillaume Lescoffit

Abstract Faults are known to affect the way that fluids can flow in clastic oil and gas reservoirs. Fault barriers either stop fluids from passing across or they restrict and direct the fluid flow, creating static or dynamic reservoir compartments. Representing the effect of these barriers in reservoir models is key to establishing optimal plans for reservoir drainage, field development and production. Fault property modelling is challenging, however, as observations of faults in nature show a rapid and unpredictable variation in fault rock content and architecture. Fault representation in reservoir models will necessarily be a simplification, and it is important that the uncertainty ranges are captured in the input parameters. History matching also requires flexibility in order to handle a wide variety of data and observations. The Juxtaposition Table Method is a new technique that efficiently handles all relevant geological and production data in fault property modelling. The method provides a common interface that is easy to relate to for all petroleum technology disciplines, and allows a close cooperation between the geologist and reservoir engineer in the process of matching the reservoir model to observed production behaviour. Consequently, the method is well suited to handling fault property modelling in the complete life cycle of oil and gas fields, starting with geological predictions and incorporating knowledge of dynamic reservoir behaviour as production data become available.
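To make the idea concrete: a juxtaposition table keys a fault property (here a transmissibility multiplier) on the pair of zones in contact across the fault, giving one compact interface that both the geologist and the reservoir engineer can edit during matching. The zone names and values below are made up for illustration:

```python
# Multiplier per footwall/hanging-wall zone pair (illustrative values;
# real tables are built from fault-seal analysis and then refined
# against production data during history matching).
juxtaposition_table = {
    ("ZoneA", "ZoneA"): 0.50,
    ("ZoneA", "ZoneB"): 0.05,   # sand against a shale-prone zone: strong baffle
    ("ZoneB", "ZoneB"): 0.30,
}

def fault_multiplier(zone_fw, zone_hw, table, default=0.0):
    """Look up the transmissibility multiplier for a zone pair,
    insensitive to which side is footwall vs. hanging wall."""
    key = tuple(sorted((zone_fw, zone_hw)))
    return table.get(key, default)

m = fault_multiplier("ZoneB", "ZoneA", juxtaposition_table)
```

An unseen juxtaposition falls back to the sealing default, which is one way (of several) to handle pairs not yet characterized geologically.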


SPE Journal ◽  
2007 ◽  
Vol 12 (04) ◽  
pp. 408-419 ◽  
Author(s):  
Baoyan Li ◽  
Francois Friedmann

Summary History matching is an inverse problem in which an engineer calibrates key geological/fluid flow parameters by fitting a simulator's output to the real reservoir production history. It has no unique solution because of insufficient constraints. History-match solutions are obtained by searching for minima of an objective function below a preselected threshold value. Experimental design and response surface methodologies provide an efficient approach to build proxies of objective functions (OF) for history matching. The search for minima can then be easily performed on the proxies of the OF as long as their accuracy is acceptable. In this paper, we first introduce a novel experimental design methodology for semi-automatically selecting the sampling points, which are used to improve the accuracy of constructed proxies of the nonlinear OF. This method is based on derivatives of constructed proxies. We propose an iterative procedure for history matching, applying this new design methodology. To obtain the global optima, the proxies of an objective function are initially constructed on the global parameter space. They are iteratively improved until adequate accuracy is achieved. We locate subspaces in the vicinity of the optima using a clustering technique to improve the accuracy of the reconstructed OF in these subspaces. We test this novel methodology and history-matching procedure with two waterflooded reservoir models. One model is the Imperial College fault model (Tavassoli et al. 2004). It contains a large bank of simulation runs. The other is a modified version of the SPE9 (Killough 1995) benchmark problem. We demonstrate the efficiency of this newly developed history-matching technique. Introduction History matching (Eide et al. 1994; Landa and Güyagüler 2003) is an inverse problem in which an engineer calibrates key geological/fluid flow parameters of reservoirs by fitting a reservoir simulator's output to the real reservoir production history. 
It has no unique solution because of insufficient constraints. Traditional history matching is performed semi-empirically, based on the engineer's understanding of the field production behavior. Usually, the model parameters are adjusted using a one-factor-at-a-time approach. History matching can be very time-consuming, because many simulation runs may be required to obtain a good fit. Attempts have been made to automate the history-matching process by using optimal control theory (Chen et al. 1974) and gradient techniques (Gomez et al. 2001). Also, design of experiments (DOE) and response surface methodologies (RSM) (Eide et al. 1994; Box and Wilson 1987; Montgomery 2001; Box and Hunter 1957; Box and Wilson 1951; Damsleth et al. 1992; Egeland et al. 1992; Friedmann et al. 2003) were introduced in the late 1990s to guide automatic history matching. The goal of these automatic methods is to achieve considerably faster history matching than the traditional approach. History matching is an optimization problem. The objective is to find the best of all possible sets of geological/fluid flow parameters to fit the production data of reservoirs. To assess the quality of the match, we define an OF (Atallah 1999). For history-matching problems, an objective function is usually defined as a distance (Landa and Güyagüler 2003) between a simulator's output and reservoir production data. History-matching solutions are obtained by searching for minima of the objective function. Experimental design and response surface methodologies provide an efficient approach to build up hypersurfaces (Kecman 2001) of objective functions (i.e., proxies of the objective functions) with a limited number of simulation runs for history matching. The search for minima can then be easily performed on these proxies as long as their accuracy is acceptable. The efficiency of this technique depends on constructing adequately accurate proxies of the objective function.
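The proxy idea reduces to: sample the objective function at designed points, fit a cheap response surface, and search the surface instead of the simulator. In one dimension with a quadratic surface (a deliberately minimal stand-in for the paper's iterative, derivative-guided designs; all names are ours):

```python
import numpy as np

def fit_quadratic_proxy(x, f):
    """Least-squares fit of f(x) ~ a*x**2 + b*x + c to sampled runs."""
    A = np.vstack([x ** 2, x, np.ones_like(x)]).T
    a, b, c = np.linalg.lstsq(A, f, rcond=None)[0]
    return a, b, c

# Five designed "simulation runs" of a toy objective function.
x_runs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f_runs = (x_runs - 2.0) ** 2 + 0.5      # true minimum at x = 2
a, b, c = fit_quadratic_proxy(x_runs, f_runs)
x_star = -b / (2.0 * a)                 # proxy minimum: candidate history match
```

In practice the proxy would be refitted iteratively, adding sampling points near `x_star` until the proxy misfit agrees with the simulator to an acceptable tolerance, which is the accuracy-improvement loop the paper automates.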


SPE Journal ◽  
2014 ◽  
Vol 19 (05) ◽  
pp. 873-890 ◽  
Author(s):  
Xia Yan ◽  
Albert C. Reynolds

Summary Optimization algorithms that incorporate a stochastic gradient [such as simultaneous-perturbation stochastic approximation (SPSA), simplex, and EnOpt] are easy to implement in conjunction with any reservoir simulator. However, for realistic problems, a stochastic gradient provides only a rough approximation of the true gradient, and, in particular, the angle between a stochastic gradient and the associated true gradient is typically far from zero even though a properly computed stochastic gradient usually represents an uphill direction. This paper develops a more robust optimization procedure by replacing the components of largest magnitude of the stochastic gradient with a finite-difference (FD) approximation of the pertinent partial derivatives. In essence, the objective of the method is to determine which components of the unknown true gradient are most important and then replace the corresponding components of the stochastic gradient with more-accurate FD approximations. This modified gradient can then be used in a gradient-based optimization algorithm to find the minimum or maximum of a given cost function. Our focus application is the estimation of optimal well controls, but it is clear that the method could also be used for other applications, including history matching.
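The hybrid gradient described above can be sketched directly; the function and variable names below are ours, and the toy cost function replaces the reservoir-simulation objective:

```python
import numpy as np

def hybrid_gradient(cost, x, g_stoch, k, eps=1e-5):
    """Replace the k largest-magnitude components of a stochastic
    gradient with central finite-difference partial derivatives.

    This mirrors the paper's idea: spend accurate (but expensive) FD
    evaluations only on the components that appear to matter most,
    keeping the cheap stochastic estimate everywhere else.
    """
    g = g_stoch.astype(float).copy()
    for i in np.argsort(np.abs(g_stoch))[-k:]:
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (cost(x + e) - cost(x - e)) / (2.0 * eps)
    return g

# Toy cost with known gradient 2*x; fake "SPSA" gradient = true + noise.
cost = lambda v: float(np.sum(v ** 2))
x = np.array([3.0, -1.0, 0.5])
g_spsa = 2.0 * x + np.array([0.8, -0.6, 0.3])   # noisy uphill estimate
g = hybrid_gradient(cost, x, g_spsa, k=2)
# Components 0 and 1 (largest magnitudes) are corrected to 6.0 and -2.0.
```

Each corrected component costs two extra cost evaluations, so k trades accuracy against the simulation budget.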


2009 ◽  
Vol 12 (04) ◽  
pp. 528-541 ◽  
Author(s):  
Adedayo Oyerinde ◽  
Akhil Datta-Gupta ◽  
William J. Milliken

Summary Streamline-based assisted and automatic history matching techniques have shown great potential in reconciling high resolution geologic models to production data. However, a major drawback of these approaches has been the assumption of incompressible or slightly compressible flow, which has limited applications to two-phase water/oil displacements only. Recent generalization of streamline models to compressible flow has greatly expanded the scope and applicability of streamline-based history matching, in particular for three-phase flow. In our previous work, we calibrated geologic models to production data by matching the water cut (WCT) and gas/oil ratio (GOR) using the generalized travel-time inversion (GTTI) technique. For field applications, however, the highly nonmonotonic profile of the GOR data often presents a challenge to this technique. In this work we present a transformation of the field production data that makes it more amenable to GTTI. Further, we generalize the approach to incorporate bottomhole flowing pressure during three-phase history matching. We examine the practical feasibility of the method using a field-scale synthetic example (SPE-9 comparative study) and a field application. The field case is a highly faulted West African reservoir with an underlying aquifer. The reservoir is produced under depletion with three producers and has more than 30 years of production history. The simulation model has several pressure/volume/temperature (PVT) and special core analysis (SCAL) regions and more than 100,000 cells. The GTTI is shown to be robust because of its quasilinear properties, as demonstrated by the WCT and GOR match for a period of 30 years of production history.


Reservoir modelling and production forecasting can provide vital inputs to the efficient management of petroleum reservoirs. Since reservoirs are highly heterogeneous and nonlinear in nature, it is often difficult to obtain accurate estimates of the spatial distribution of reservoir properties and the corresponding production profiles. If an accurate model of a reservoir is built, it can lead to efficient management of the reservoir. This paper describes the mathematical modelling of oil reservoirs along with various optimization techniques applicable to history matching and production forecasting. Gradient-based and non-gradient-based optimization techniques, viz. Simulated Annealing (SA), Scatter Search (SS), the Neighborhood Algorithm (NA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Ensemble Kalman Filters (EnKF), and Genetic Algorithms (GA), and their application to reservoir production history matching and performance forecasting are presented. The recent advancements and variants of these techniques applied for the purpose are also presented.
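Of the derivative-free methods surveyed, simulated annealing is the simplest to sketch. The toy quadratic misfit below stands in for a simulation-based history-match objective; the loop structure (random-walk proposals, temperature-controlled uphill acceptance, geometric cooling) is the standard one:

```python
import math
import random

def simulated_annealing(misfit, x0, steps=3000, t0=1.0, seed=0):
    """Minimal SA loop: Gaussian proposals, accept uphill moves with
    probability exp(-delta/T) under geometric cooling, track the best."""
    rng = random.Random(seed)
    x, fx = x0, misfit(x0)
    best, fbest = x, fx
    for k in range(steps):
        T = max(t0 * 0.995 ** k, 1e-12)
        cand = x + rng.gauss(0.0, 0.5)
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy misfit with its minimum at x = 1.5 (a real application would run
# a reservoir simulation here, making every step expensive).
best, fbest = simulated_annealing(lambda x: (x - 1.5) ** 2, x0=5.0)
```

The expense noted in the text is visible in the structure: every proposal requires a misfit evaluation, i.e. a full flow simulation in the reservoir setting.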


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation, is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements and to quantify the model uncertainty of the posterior reservoir models. To address computational and numerical challenges when applying ensemble-based optimization methods on large seismic data volumes, we develop a deep representation learning method, namely, the deep convolutional autoencoder. Such a method is used to reduce the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features to capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to production data and the reparameterized time-lapse seismic data.
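Stripped of the deep-learning machinery, the autoencoder's role is dimensionality reduction with a decoder to map back. The optimal linear autoencoder is a truncated SVD, so a numpy sketch of the encode/decode workflow (a much-simplified stand-in for the deep convolutional network, on toy low-rank "seismic" data) is:

```python
import numpy as np

rng = np.random.default_rng(0)
z_true = rng.normal(size=(500, 2))     # two underlying latent factors
mix = rng.normal(size=(2, 20))
X = z_true @ mix                       # toy 20-dim "seismic" data, rank 2

# Encoder/decoder from the top-k right singular vectors.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V_k = Vt[:2].T                         # encoder: 20 features -> 2
Z = X @ V_k                            # hidden features (what the smoother conditions on)
X_hat = Z @ V_k.T                      # decoder: reconstruction

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

In the actual framework the (nonlinear) hidden features `Z`, not the raw seismic volumes, enter the ensemble update, which is what keeps the required ensemble size manageable.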

