Fast History Matching and Optimization Using a Novel Physics-Based Data-Driven Model: An Application to a Diatomite Reservoir

SPE Journal ◽  
2021 ◽  
pp. 1-20
Author(s):  
Zhenzhen Wang ◽  
Jincong He ◽  
William J. Milliken ◽  
Xian-Huan Wen

Summary Full-physics models in history matching (HM) and optimization can be computationally expensive because these problems usually require hundreds of simulations or more. In a previous study, a physics-based data-driven network model was implemented with a commercial simulator that served as a surrogate without the need to build a 3D geological model. In this paper, the network model is reconstructed to account for the complex reservoir conditions of mature fields and is successfully applied to a diatomite reservoir in the San Joaquin Valley, California, for rapid HM and optimization. The reservoir is simplified into a network of 1D connections between well perforations. These connections are discretized into gridblocks, and the grid properties are calibrated to historical production data. Elevation change, saturation distribution, capillary pressure, and relative permeability are accounted for to best represent the mature field conditions. To simulate this physics-based network model through a commercial simulator, an equivalent 2D Cartesian model is designed in which rows correspond to the previously mentioned connections. Thereafter, the HM can be performed with the ensemble smoother with multiple data assimilation (ESMDA) algorithm under a sequential iterative process. A representative model after HM is then used for well control optimization. The network model methodology has been successfully applied to waterflood optimization for a 56-well sector model of a diatomite reservoir in the San Joaquin Valley. HM results show that the network model matches the field-level production history and gives reasonable matches for most of the wells, including pressure and volumetric data. The calibrated posterior ensemble of the HM yields a satisfactory production prediction that is verified by the remaining historical data. For well control optimization, the P50 model is selected to maximize the net present value (NPV) over 5 years under the provided well/field constraints. 
This confirms that the calibrated network model is accurate enough for production forecasts and optimization. The use of a commercial simulator in the network model provides the flexibility to account for complex physics, such as elevation differences between wells, saturation nonequilibrium, and strong capillary pressure. Unlike the traditional big-loop workflow, which relies on a detailed characterization of geological models, the proposed network model requires only production data and can be built and updated rapidly. The model also runs much faster (tens of seconds) than a full-physics model because it uses far fewer gridblocks. To our knowledge, this is the first time this physics-based data-driven network model has been applied with a commercial simulator to a field waterflood case. Unlike approaches developed with analytic solutions, the use of a commercial simulator makes it feasible to extend the method to more complex processes (e.g., thermal or compositional flow). It serves as a useful surrogate model for fast and reliable decision-making in reservoir management.
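The ESMDA update step named in the abstract has a standard form that can be sketched compactly. The snippet below is a toy illustration, not the authors' implementation: the linear operator `G`, the ensemble size, and all variable names are invented, and the commercial-simulator run of the network model is replaced by a 2x2 matrix so the update itself is visible.

```python
import numpy as np

def esmda_update(m_ens, d_ens, d_obs, c_d, alpha, rng):
    """One ESMDA iteration: nudge every ensemble member toward the
    observations, which are perturbed with inflated (alpha-scaled) noise."""
    ne = m_ens.shape[1]
    dm = m_ens - m_ens.mean(axis=1, keepdims=True)
    dd = d_ens - d_ens.mean(axis=1, keepdims=True)
    c_md = dm @ dd.T / (ne - 1)              # parameter-data cross-covariance
    c_dd = dd @ dd.T / (ne - 1)              # predicted-data covariance
    gain = c_md @ np.linalg.inv(c_dd + alpha * c_d)
    noise = rng.multivariate_normal(np.zeros(d_obs.size), c_d, size=ne).T
    d_pert = d_obs[:, None] + np.sqrt(alpha) * noise
    return m_ens + gain @ (d_pert - d_ens)

# toy problem: recover a 2-parameter model through a linear "simulator"
rng = np.random.default_rng(0)
G = np.array([[1.0, 0.5], [0.2, 1.5]])       # stand-in forward model
d_obs = G @ np.array([2.0, -1.0])            # synthetic observed data
c_d = 0.01 * np.eye(2)                       # observation-error covariance
m_ens = rng.normal(0.0, 1.0, size=(2, 50))   # 50-member prior ensemble
for alpha in [4.0, 4.0, 4.0, 4.0]:           # inflation factors, sum(1/a) = 1
    m_ens = esmda_update(m_ens, G @ m_ens, d_obs, c_d, alpha, rng)
```

In the sequential iterative process described above, each pass re-runs the forward model on the current ensemble before the next assimilation, and the inflation factors are chosen so their inverses sum to one.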


2019 ◽  
Vol 24 (6) ◽  
pp. 1943-1958 ◽  
Author(s):  
V. L. S. Silva ◽  
M. A. Cardoso ◽  
D. F. B. Oliveira ◽  
R. J. de Moraes

Abstract In this work, we discuss the application of stochastic optimization approaches to the OLYMPUS case, a benchmark challenge that seeks to evaluate different techniques applied to well control and field development optimization. To that end, three exercises were proposed: (i) well control optimization; (ii) field development optimization; and (iii) joint optimization. All applications were performed on the OLYMPUS case, a synthetic reservoir model with geological uncertainty provided by TNO (Fonseca 2018). First, in the well control exercise, we successfully applied an ensemble-based approximate gradient method in a robust optimization formulation. Second, we solved the field development exercise using a genetic algorithm framework designed with special features for the problem of interest. Finally, in order to evaluate further gains, a sequential optimization approach was employed, in which we ran one more well control optimization based on the optimal well locations. Even though we utilize relatively well-known techniques in our studies, we describe the necessary adaptations to the algorithms that enable their successful application to real-life scenarios. Significant gains in the expected net present value are obtained: in exercise (i), a gain of 7% with respect to reactive control; in exercise (ii), a gain of 660% with respect to an initial well placement based on an engineering approach; and in (iii), an extra gain of 3% from an additional well control optimization after the well placement optimization. All these gains are obtained at an affordable computational cost via the extensive utilization of high-performance computing (HPC) infrastructure. We also apply a scenario reduction technique to exercise (i), obtaining gains similar to those of the full-ensemble optimization at a substantially lower computational cost. 
In conclusion, we demonstrate how the state-of-the-art optimization technology available in the model-based reservoir management literature can be successfully applied to field development optimization via the conscious utilization of HPC facilities.
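All three exercises score candidate schedules by expected net present value, so a minimal discounted-cash-flow sketch may help fix ideas. The prices, costs, and discount rate below are placeholders, not values from the OLYMPUS benchmark:

```python
def npv(oil_rates, water_inj, water_prod, oil_price=70.0,
        inj_cost=5.0, prod_cost=8.0, discount=0.10):
    """Net present value of one control schedule: per-period cash flow
    (oil revenue minus water handling costs) discounted back to time zero."""
    total = 0.0
    for t, (qo, qwi, qwp) in enumerate(
            zip(oil_rates, water_inj, water_prod), start=1):
        cash = qo * oil_price - qwi * inj_cost - qwp * prod_cost
        total += cash / (1.0 + discount) ** t
    return total

# one candidate schedule over three periods (volumes per period)
value = npv([100.0, 90.0, 80.0], [50.0, 55.0, 60.0], [20.0, 30.0, 45.0])
```

A robust formulation averages this value over the geological ensemble; the optimizers then search the space of well controls (and, in exercises ii-iii, well locations) for the maximum.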


2001 ◽  
Vol 4 (06) ◽  
pp. 455-466 ◽  
Author(s):  
A. Graue ◽  
T. Bognø ◽  
B.A. Baldwin ◽  
E.A. Spinler

Summary Iterative comparison between experimental work and numerical simulations has been used to predict oil-recovery mechanisms in fractured chalk as a function of wettability. Selective and reproducible alteration of wettability by aging in crude oil at an elevated temperature produced chalk blocks that were strongly water-wet and moderately water-wet, but with identical mineralogy and pore geometry. Large-scale, nuclear-tracer, 2D-imaging experiments monitored the waterflooding of these blocks of chalk, first whole, then fractured. These data provided in-situ fluid saturations for validating numerical simulations and for evaluating the capillary pressure and relative permeability input data used in the simulations. Capillary pressure and relative permeabilities at each wettability condition were measured experimentally and used as input for the simulations. Optimization of either the Pc data or the kr curves gave indications of the validity of these input data. History matching both the production profile and the development of the in-situ saturation distribution gave higher confidence in the simulations than matching production profiles only. Introduction Laboratory waterflood experiments with larger blocks of fractured chalk, in which the advancing waterfront was imaged by a nuclear tracer technique, showed that changing the wettability conditions from strongly water-wet to moderately water-wet had minor impact on the oil-production profiles.1–3 The in-situ saturation development, however, was significantly different, indicating differences in oil-recovery mechanisms.4 The main objective of the current experiments was to determine the oil-recovery mechanisms at different wettability conditions. 
We have reported earlier on a technique that reproducibly alters wettability in outcrop chalk by aging the rock material in stock-tank crude oil at an elevated temperature for a selected period of time.5 After applying this aging technique to several blocks of chalk, we imaged waterfloods on blocks of outcrop chalk at different wettability conditions, first as a whole block, then when the blocks were fractured and reassembled. Earlier work reported experiments using an embedded fracture network,4,6,7 while this work also studied an interconnected fracture network. A secondary objective of these experiments was to validate a full-field numerical simulator for prediction of the oil production and the in-situ saturation dynamics for the waterfloods. In this process, the validity of the experimentally measured capillary pressure and relative permeability data, used as input for the simulator, has been tested at strongly water-wet and moderately water-wet conditions. Optimization of either Pc data or kr curves for the chalk matrix in the numerical simulations of the whole blocks at different wettabilities gave indications of the data's validity. History matching both the production profile and the in-situ saturation distribution development gave higher confidence in the simulations of the fractured blocks, in which only the fracture representation was a variable. Experimental Rock Material and Preparation. Two chalk blocks, CHP8 and CHP9, approximately 20×12×5 cm thick, were obtained from large pieces of Rørdal outcrop chalk from the Portland quarry near Ålborg, Denmark. The blocks were cut to size with a band saw and used without cleaning. Local air permeability was measured at each intersection of a 1×1-cm grid on both sides of the blocks with a minipermeameter. The measurements indicated homogeneous blocks on a centimeter scale. This chalk material had never been contacted by oil and was strongly water-wet. The blocks were dried in a 90°C oven for 3 days. 
End pieces were mounted on each block, and the whole assembly was epoxy coated. Each end piece contained three fittings so that entering and exiting fluids were evenly distributed with respect to height. The blocks were vacuum evacuated and saturated with brine containing 5 wt% NaCl+3.8 wt% CaCl2. Fluid data are found in Table 1. Porosity was determined from weight measurements, and the permeability was measured across the epoxy-coated blocks, at 2×10⁻³ µm² and 4×10⁻³ µm², for CHP8 and CHP9, respectively (see block data in Table 2). Immobile water saturations of 27 to 35% pore volume (PV) were established for both blocks by oilflooding. To obtain uniform initial water saturation, Swi, oil was injected alternately at both ends. Oilfloods of the epoxy-coated block, CHP8, were carried out with stock-tank crude oil in a heated pressure vessel at 90°C with a maximum differential pressure of 135 kPa/cm. CHP9 was oilflooded with decane at room temperature. Wettability Alteration. Selective and reproducible alteration of wettability, by aging in crude oil at elevated temperatures, produced a moderately water-wet chalk block, CHP8, with similar mineralogy and pore geometry to the untreated strongly water-wet chalk block CHP9. Block CHP8 was aged in crude oil at 90°C for 83 days at an immobile water saturation of 28% PV. A North Sea crude oil, filtered at 90°C through a chalk core, was used to oilflood the block and to determine the aging process. Two twin samples drilled from the same chunk of chalk as the cut block were treated similarly to the block. An Amott-Harvey test was performed on these samples to indicate the wettability conditions after aging.8 After the waterfloods were terminated, four core plugs were drilled out of each block, and wettability measurements were conducted with the Amott-Harvey test. 
Because of possible wax problems with the North Sea crude oil used for aging, decane was used as the oil phase during the waterfloods, which were performed at room temperature. After the aging was completed for CHP8, the crude oil was flushed out with decahydronaphthalene (decalin), which again was flushed out with n-decane, all at 90°C. Decalin was used as a buffer between the decane and the crude oil to avoid asphaltene precipitation, which may occur when decane contacts the crude oil.
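The Amott-Harvey test used above to quantify wettability reduces to simple volume ratios. This sketch is generic; the example volumes are made up for illustration, not the measured values for CHP8 or CHP9:

```python
def amott_harvey_index(oil_spont, oil_forced, water_spont, water_forced):
    """Amott-Harvey wettability index: displacement-by-water ratio minus
    displacement-by-oil ratio. +1 is strongly water-wet, -1 strongly oil-wet."""
    i_water = oil_spont / (oil_spont + oil_forced)      # oil displaced by water
    i_oil = water_spont / (water_spont + water_forced)  # water displaced by oil
    return i_water - i_oil

# hypothetical plug: most oil imbibes spontaneously, no spontaneous oil uptake
idx = amott_harvey_index(8.0, 2.0, 0.0, 10.0)  # strongly water-wet plug
```

Aging in crude oil shifts the index toward zero (moderately water-wet) by suppressing spontaneous water imbibition.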


2021 ◽  
Author(s):  
Tsubasa Onishi ◽  
Hongquan Chen ◽  
Jiang Xie ◽  
Shusei Tanaka ◽  
Dongjae Kam ◽  
...  

Abstract Streamline-based methods have proven effective for various subsurface flow and transport modeling problems. However, their application is limited in dual-porosity/dual-permeability (DPDK) systems because of the difficulty of describing interactions between matrix and fracture during streamline tracing. In this work, we present a robust streamline tracing algorithm for DPDK models and apply the new algorithm to rate allocation optimization in a waterflooded reservoir. In the proposed method, streamlines are traced in both the fracture and matrix domains. The inter-fluxes between fracture and matrix are captured by switching streamlines from one domain to the other with a probability computed from those inter-fluxes. The approach is fundamentally similar to the existing streamline tracing technique and can be utilized in streamline-assisted applications such as flow diagnostics, history matching, and production optimization. The proposed method is benchmarked against a finite-volume-based approach in which the grid-based time-of-flight is obtained by solving the stationary transport equation. We first validated our method using simple examples. Visual time-of-flight comparisons, as well as tracer concentrations and allocation factors at wells, show good agreement. Next, we applied the proposed method to field-scale models to demonstrate its robustness. The results show that our method reduces numerical artifacts and better represents reservoir heterogeneity and well connectivity with sub-grid resolution. The proposed method is then used for rate allocation optimization in DPDK models. A streamline-based, gradient-free algorithm is used to optimize net present value by adjusting both injection and production well rates under operational constraints. The results show that the optimized schedule offers significant improvements in recovery factor, net present value, and sweep efficiency compared with the base scenario of equal-rate injection and production. 
The optimization algorithm is computationally efficient as it requires only a few forward reservoir simulations.
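The domain-switching rule described above (streamlines jump between fracture and matrix with a probability computed from the inter-fluxes) can be sketched as follows. The function and flux names are hypothetical, not from the paper:

```python
import random

def next_domain(current, flux_within, flux_cross, rng=random):
    """Choose the domain for the next streamline step: switch domains with
    probability equal to the cross-flux fraction of the total outflux."""
    total = flux_within + flux_cross
    if total <= 0.0:
        return current          # no outflux: stay put
    p_switch = flux_cross / total
    if rng.random() < p_switch:
        return "matrix" if current == "fracture" else "fracture"
    return current

# example cell: 25% of the outflux crosses from fracture into matrix
random.seed(7)
steps = [next_domain("fracture", 3.0, 1.0) for _ in range(10000)]
frac_switched = steps.count("matrix") / len(steps)
```

Over many streamlines, the fraction that switches at a cell converges to the cross-flux fraction, which is what preserves the matrix-fracture flux partition in the traced field.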


2021 ◽  
Author(s):  
Elizabeth Ruiz ◽  
Brandon Thibodeaux ◽  
Christopher Dorion ◽  
Herman Mukisa ◽  
Majid Faskhoodi ◽  
...  

Abstract Optimized geomodeling and history matching of production data are presented by utilizing an integrated rock and fluid workflow. Facies identification is performed by use of image logs and other geological information. In addition, image logs are used to help define structural geodynamic processes that occurred in the reservoir. Methods of reservoir fluid geodynamics are used to assess the extent of fluid compositional equilibrium, especially of the asphaltenes, and thereby the extent of connectivity in these facies. Geochemical determinations are shown to be consistent with measurements of compositional thermodynamic equilibrium. The ability to develop the geo-scenario of the reservoir, the coherent evolution of rock and contained fluids in the reservoir over geologic time, improves the robustness of the geomodel. In particular, the sequence of oil charge, compositional equilibrium, fault block throw, and primary biogenic gas charge is established in this middle Pliocene reservoir, with implications for production, field extension, and local basin exploration. History matching of production data proves the accuracy of the geomodel; nevertheless, refinements to the geomodel and improved history matching were obtained by expanded deterministic property estimation from wireline log and other data. The early connection of fluid data, both thermodynamic and geochemical, with the relevant facies and the determination of their properties enables a more facile method to incorporate these data into the geomodel. Logging data from future wells in the field can be imported into the geomodel, allowing deterministic optimization of this model long after production has commenced. While each reservoir is unique with its own idiosyncrasies, the workflow presented here is generally applicable to all reservoirs and always improves reservoir understanding.


FLORESTA ◽  
2019 ◽  
Vol 49 (4) ◽  
pp. 735
Author(s):  
Luan Demarco Fiorentin ◽  
Julio Eduardo Arce ◽  
Allan Libanio Pelissari ◽  
Rodrigo Otávio Veiga de Miranda ◽  
Thaís Wisniewski de Freitas

This study aimed to evaluate optimized planning strategies and analyze their performance in timber production. Data were obtained from Pinus spp. stands of a forestry company with an unbalanced planted area over time. Maximization models of forest production (1) and net present value (2) were formulated, and two minimization objective functions, of production deviation (3) and of minimum and maximum production oscillation (4), were tested as alternatives to the traditional models. The highest average thinning and clearcutting areas were obtained in strategy 1. Strategies 1 and 2 resulted in the greatest variability of forestry operations. All strategies resulted in the highest timber production for sawn and special sawn wood and the lowest for veneer, while the pulpwood volume was almost constant. Strategies 1 and 2 provided the highest average timber volume and the greatest variability in production, while strategies 3 and 4 were more efficient, since they supplied the industrial demand with a homogeneous production.


SPE Journal ◽  
2018 ◽  
Vol 23 (05) ◽  
pp. 1496-1517 ◽  
Author(s):  
Chaohui Chen ◽  
Guohua Gao ◽  
Ruijian Li ◽  
Richard Cao ◽  
Tianhong Chen ◽  
...  

Summary Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method to enhance the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and the prior mean values of uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation, using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among different minimization tasks whenever they help a specific minimization task converge toward its global minimum. To improve the sharing of results, we relax the accuracy of the finite-difference approximations for the gradients by using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we can improve the quality of the RML conditional realizations that sample the approximate posterior. The proposed workflow is first validated with a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations. The computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach. 
The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real-production data in both the history-matching period and the blind-test period. Therefore, the new approach can enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment using production-forecasting results generated from all conditional realizations, resulting in significant business impact.
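The perturbed objective at the heart of RML, as described above, is simple to state: each realization minimizes a misfit in which both the prior mean and the observed data have been shifted by Gaussian noise. A minimal sketch, with placeholder names and an identity forward model standing in for the reservoir simulator:

```python
import numpy as np

def rml_objective(m, m_prior, eps_m, d_obs, eps_d, g, c_m_inv, c_d_inv):
    """Perturbed RML misfit for one realization: a prior term around the
    noise-shifted prior mean plus a data term around the noise-shifted data."""
    dm = m - (m_prior + eps_m)
    dd = g(m) - (d_obs + eps_d)
    return float(dm @ c_m_inv @ dm + dd @ c_d_inv @ dd)

# unperturbed sanity check with an identity forward model:
m0 = np.array([1.0, 2.0])
zero = np.zeros(2)
base = rml_objective(m0, m0, zero, m0, zero, lambda x: x,
                     np.eye(2), np.eye(2))
```

The DGN scheme then runs one such minimization per realization concurrently, sharing simulation results among tasks instead of estimating every gradient from an isolated set of finite-difference runs.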


2021 ◽  
Author(s):  
Bashar Alramahi ◽  
Qaed Jaafar ◽  
Hisham Al-Qassab

Abstract Classifying rock facies and estimating permeability is particularly challenging in microporous-dominated carbonate rocks. Reservoir rock types within a very small porosity range can have up to two orders of magnitude difference in permeability, resulting in high uncertainty in facies and permeability assignment in static and dynamic models. While seismic and conventional porosity logs can guide the mapping of large-scale features to define resource density, estimating permeability requires the integration of advanced logs, core measurements, production data, and a general understanding of the geologic depositional setting. Core-based primary drainage capillary pressure measurements, including porous plate and mercury injection, offer valuable insight into the relation between rock quality (i.e., permeability, pore throat size) and water saturation at various capillary pressure levels. Capillary pressure data were incorporated into a petrophysical workflow that compares the current (Archie) water saturation at a particular height above the free water level (i.e., capillary pressure) to the water saturation expected from core-based capillary pressure measurements of various rock facies. This comparison was then used to assign rock facies and, ultimately, to estimate permeability along the entire wellbore, differentiating low-quality microporous rocks from high-quality grainstones with similar porosity values. The workflow first requires normalizing log-based water saturations relative to structural position and proximity to the free water level to ensure that the only variable impacting current-day water saturation is reservoir quality. This paper presents a case study in which this workflow was used to detect the presence of grainstone facies in a giant Middle Eastern carbonate field. 
Log-based algorithms were used to compare the Archie water saturation with primary drainage, core-based saturation-height functions of different rock facies to detect the presence of grainstones and estimate their permeability. Grainstones were then mapped spatially over the field and overlaid with field-wide oil production and water injection data to confirm a positive correlation between predicted reservoir quality and the productivity/injectivity of the reservoir facies. Core-based permeability measurements were also used to confirm predicted permeability trends along wellbores where core was acquired. This workflow presents a novel approach to integrating core, log, and dynamic production data to map high-quality reservoir facies, guiding future field development strategy, workover decisions, and the selection of future well locations.
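The facies-assignment step described above amounts to comparing the log-derived water saturation with each facies' saturation-height function and choosing the closest match. The sketch below is schematic: the two power-law saturation-height functions and their coefficients are invented for illustration, not the field's calibrated functions:

```python
def classify_facies(sw_log, height, facies_shf):
    """Assign the facies whose saturation-height function best reproduces
    the log-derived (Archie) water saturation at this height above the FWL."""
    best_name, best_err = None, float("inf")
    for name, shf in facies_shf.items():
        err = abs(shf(height) - sw_log)
        if err < best_err:
            best_name, best_err = name, err
    return best_name

# hypothetical saturation-height functions (height in m above free water level)
facies_shf = {
    "grainstone": lambda h: 0.10 + 0.90 * min(1.0, (2.0 / h) ** 0.8),
    "microporous": lambda h: 0.45 + 0.55 * min(1.0, (2.0 / h) ** 0.4),
}
```

Once a facies is assigned at each depth, permeability follows from that facies' porosity-permeability transform rather than from porosity alone, which is what separates grainstones from microporous rock of similar porosity.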

