Influence of Additional Objective Functions on Uncertainty Reduction and History Matching

Author(s):  
Forlan La Rosa Almeida ◽  
Helena Nandi Formentin ◽  
Célio Maschio ◽  
Alessandra Davolio ◽  
Denis José Schiozer
SPE Journal ◽  
2020 ◽  
Vol 25 (04) ◽  
pp. 2119-2142
Author(s):  
Carla Janaina Ferreira ◽  
Ian Vernon ◽  
Camila Caiado ◽  
Helena Nandi Formentin ◽  
Guilherme Daniel Avansi ◽  
...  

Summary When performing classic uncertainty reduction according to dynamic data, a large number of reservoir simulations need to be evaluated at high computational cost. As an alternative, we construct Bayesian emulators that mimic the dominant behavior of the reservoir simulator, and which are several orders of magnitude faster to evaluate. We combine these emulators within an iterative procedure that involves substantial but appropriate dimensional reduction of the output space (which represents the reservoir physical behavior, such as production data), enabling a more effective and efficient uncertainty reduction on the input space (representing uncertain reservoir parameters) than traditional methods, and with a more comprehensive understanding of the associated uncertainties. This study uses the emulation-based Bayesian history-matching (BHM) uncertainty analysis for the uncertainty reduction of complex models, which is designed to address problems with a high number of both input and output parameters. We detail how to efficiently choose sets of outputs that are suitable for emulation and that are highly informative to reduce the input-parameter space and investigate different classes of outputs and objective functions. We use output emulators and implausibility analysis iteratively to perform uncertainty reduction in the input-parameter space, and we discuss the strengths and weaknesses of certain popular classes of objective functions in this context. We demonstrate our approach through an application to a benchmark synthetic model (built using public data from a Brazilian offshore field) in an early stage of development using 4 years of historical data and four producers. This study investigates traditional simulation outputs (e.g., production data) and also novel classes of outputs, such as misfit indices and summaries of outputs. 
We show that despite there being a large number (2,136) of possible outputs, very few (16) were sufficient to represent the available information; these informative outputs were emulated with fast and efficient emulators at each iteration (or wave) of the history match to perform the uncertainty-reduction procedure successfully. Using this small set of outputs, we were able to substantially reduce the input space by removing 99.8% of the original volume. We found that a small set of physically meaningful individual production outputs was the most informative at early waves and, once emulated, resulted in the highest uncertainty reduction in the input-parameter space, while more complex but popular objective functions that combine several outputs were only modestly useful at later waves. This is because objective functions such as misfit indices have complex surfaces that can lead to low-quality emulators and hence result in noninformative outputs. We present an iterative emulator-based Bayesian uncertainty-reduction process in which all possible input-parameter configurations that lead to statistically acceptable matches between the simulated and observed data are identified. This methodology presents four central characteristics: incorporation of a powerful dimension reduction on the output space, resulting in significantly increased efficiency; effective reduction of the input space; computational efficiency; and provision of a better understanding of the complex geometry of the input and output spaces.
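The wave-based reduction described above can be sketched in miniature. In the following toy example (the emulator, the variances, and the one-dimensional input are illustrative assumptions, not the paper's actual models), each candidate input is scored with an implausibility measure that compares the emulator's prediction against the observed datum, and inputs exceeding the conventional three-sigma cutoff are discarded:

```python
import math
import random

# Hypothetical toy setup: one uncertain input x in [0, 1]; a cheap emulator
# supplies a mean and variance for the reservoir output at x.
OBS = 1.2        # observed datum z (assumed value)
VAR_OBS = 0.01   # observational-error variance (assumed)
VAR_DISC = 0.02  # model-discrepancy variance (assumed)

def emulator(x):
    """Stand-in emulator returning (mean, variance) of the predicted output.
    A real application would use e.g. a Gaussian-process emulator."""
    mean = 2.0 * x * (1.0 - x) + 0.5    # toy response surface
    var = 0.005 + 0.02 * abs(x - 0.5)   # emulator uncertainty grows off-centre
    return mean, var

def implausibility(x):
    """I(x) = |z - E[f(x)]| / sqrt(emulator + observation + discrepancy variance)."""
    mean, var_em = emulator(x)
    return abs(OBS - mean) / math.sqrt(var_em + VAR_OBS + VAR_DISC)

# One "wave": keep only inputs whose implausibility is below the usual
# three-sigma cutoff; the survivors form the non-implausible region.
random.seed(0)
candidates = [random.random() for _ in range(10000)]
non_implausible = [x for x in candidates if implausibility(x) < 3.0]
reduction = 1.0 - len(non_implausible) / len(candidates)
print(f"input space removed this wave: {reduction:.1%}")
```

In the full method this filtering is repeated over several waves, refitting emulators on the surviving region each time, which is how cuts as deep as 99.8% of the original volume accumulate.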



2021 ◽  
Author(s):  
Ecko Noviyanto ◽  
Deded Abdul Rohman ◽  
Theoza Nopranda ◽  
Rudini Simanjorang ◽  
Kosdar Gideon Haro ◽  
...  

Abstract This paper presents a probabilistic modeling and prediction workflow to capture the range of uncertainties and its application in a field with many wells and a long history. A static model consisting of 19 layers and 293 wells was imported as the base model. Several reservoir properties, such as relative permeability, PVT, aquifer, and initial condition, were analyzed to obtain the range of uncertainties. The probabilistic history matching was done using Assisted History Matching (AHM) tools and divided into experimental-design and optimization phases. During experimental design, the input parameters and the ranges most sensitive to the objective functions (e.g., oil-rate or total difference) were identified using a Pareto chart based on Pearson correlation. The optimization phase carried over the most sensitive parameters and utilized the Particle Swarm Optimization (PSO) algorithm to iterate the process and find the equiprobable models with minimum objective functions. After filtering the set of models created by the AHM tools by total oil production, field/well oil objective functions, the last three years' performance, and clustering using the k-means algorithm, 11 models remained. These models were then analyzed to understand the absolute risk and parameter uncertainties, e.g., mobile oil or sweep efficiency. Three models representing P10, P50, and P90 were picked and used as the base models for developing waterflood scenario designs. Several scenarios were evaluated, such as the base case, perfect pattern case, and existing well case. The incremental oil is in the range of 1.60 – 2.01 MMSTB for the Base Case, 7.57 – 9.14 MMSTB for the Perfect Pattern Case, and 6.01 – 7.75 MMSTB for the Existing Well Case. This paper introduces the application of the probabilistic method for history matching and prediction. This method can incorporate the uncertainty of the dynamic model into the forecasted production profiles.
In the end, this information could improve the quality of management decision-making in field development planning.
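The optimization phase above relies on PSO to drive the history-match objective toward its minimum. As a rough illustration only (this is not the authors' implementation; the objective here is a toy quadratic standing in for the simulator misfit, and all coefficients are conventional defaults), a minimal particle-swarm optimizer over a unit hypercube can be sketched as:

```python
import random

def misfit(params):
    """Hypothetical objective: squared distance to assumed 'true' parameter
    values, standing in for the simulated-vs-observed production misfit."""
    target = [0.3, 0.7]  # assumed values for illustration only
    return sum((p - t) ** 2 for p, t in zip(params, target))

def pso(objective, dim, n_particles=20, iters=100, seed=1):
    """Minimal particle-swarm optimizer: each particle is pulled toward its
    personal best and the swarm's global best position."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(misfit, dim=2)
print(best, best_val)
```

In the workflow above, each objective evaluation would be a reservoir-simulation run, which is why PSO is applied only to the parameters screened as most sensitive during experimental design.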


SPE Journal ◽  
2016 ◽  
Vol 21 (04) ◽  
pp. 1400-1412 ◽  
Author(s):  
Jincong He ◽  
Jiang Xie ◽  
Pallav Sarma ◽  
Xian-Huan Wen ◽  
Wen H. Chen ◽  
...  

Summary Data-acquisition programs, such as surveillance and pilot programs, play an important role in reservoir management and are crucial for minimizing subsurface risks and improving decision quality. Optimal design of a data-acquisition plan requires predicting the performance (e.g., in terms of the expected amount of uncertainty reduction in an objective function) of a given design before it is implemented. Because the data from the acquisition program are uncertain at the time of the analysis, multiple history-matching runs are required for different plausible realizations of the observed data to evaluate the expected effectiveness of the program in reducing uncertainty. As such, the computational cost may be prohibitive because the number of reservoir simulations needed for the multiple history-matching runs would be substantial. This paper proposes a framework based on proxies and rejection sampling (filtering) to perform the multiple history-matching runs with a manageable number of reservoir simulations. The proposed workflow does not depend on the linear Gaussian assumption that is a common, yet questionable, assumption in existing methods. The workflow also enables both qualitative and quantitative analysis of a surveillance plan. Qualitatively, heavy-hitter alignment analysis for the objective function and the observed data provides actionable measures for screening different surveillance designs. Quantitatively, the evaluation of expected uncertainty reduction from different surveillance plans allows for optimal design and selection of surveillance plans.
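The proxy-plus-filtering idea can be illustrated with a deliberately simplified one-parameter sketch (the proxy functions, tolerance, and sample sizes here are hypothetical stand-ins, not the paper's proxies): prior samples are filtered by rejection against each plausible realization of the observed data, and the resulting variance reduction in the objective is averaged over realizations to score the surveillance plan:

```python
import random
import statistics

# Hypothetical one-parameter example: cheap proxies map an uncertain input x
# to the measurement the surveillance plan would observe and to the
# decision objective (e.g., a recovery estimate).
def proxy_data(x):
    return 2.0 * x + 0.1

def proxy_objective(x):
    return x ** 2

def expected_reduction(n=10000, n_realizations=50, tol=0.1, seed=2):
    """Average variance reduction in the objective over plausible data
    realizations, using rejection sampling instead of full history matching."""
    rng = random.Random(seed)
    prior = [rng.random() for _ in range(n)]
    var_prior = statistics.pvariance([proxy_objective(x) for x in prior])
    reductions = []
    for x_true in [rng.random() for _ in range(n_realizations)]:
        d_obs = proxy_data(x_true)  # one plausible data realization
        # Rejection step: keep prior samples whose proxy prediction
        # falls within tolerance of this realization.
        accepted = [x for x in prior if abs(proxy_data(x) - d_obs) < tol]
        if len(accepted) < 2:
            continue
        var_post = statistics.pvariance([proxy_objective(x) for x in accepted])
        reductions.append(1.0 - var_post / var_prior)
    return statistics.mean(reductions)

print(f"expected variance reduction: {expected_reduction():.1%}")
```

Because only the cheap proxies are evaluated inside the loop, the expensive simulator is needed only to build and validate the proxies, which is what makes scoring many candidate surveillance designs tractable.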

