Integration of Well Test Pressure Data Into Heterogeneous Geological Reservoir Models

Author(s):  
Gaoming Li ◽  
Mei Han ◽  
Raj Banerjee ◽  
Albert Coburn Reynolds
2010 ◽  
Vol 13 (03) ◽  
pp. 496-508 ◽  
Author(s):  
Gaoming Li ◽  
Mei Han ◽  
Raj Banerjee ◽  
Albert C. Reynolds

2005 ◽  
Vol 8 (02) ◽  
pp. 113-121 ◽  
Author(s):  
Michael M. Levitan

Summary Pressure/rate deconvolution is a long-standing problem of well-test analysis that has been the subject of research by a number of authors. A variety of different deconvolution algorithms have been proposed in the literature. However, none of them is robust enough to be implemented in the commercial well-test-analysis software used most widely in the industry. Recently, von Schroeter et al.1,2 published a deconvolution algorithm that has been shown to work even when a reasonable level of noise is present in the test pressure and rate data. In our independent evaluation of the algorithm, we have found that it works well on consistent sets of pressure and rate data. It fails, however, when used with inconsistent data. Some degree of inconsistency is normally present in real test data. In this paper, we describe the enhancements of the deconvolution algorithm that allow it to be used reliably with real test data. We demonstrate the application of pressure/rate deconvolution analysis to several real test examples.

Introduction The well bottomhole-pressure behavior in response to a constant-rate flow test is a characteristic response function of the reservoir/well system. The constant-rate pressure-transient response depends on such reservoir and well properties as permeability, large-scale reservoir heterogeneities, and well damage (skin factor). It also depends on the reservoir flow geometry defined by the geometry of well completion and by reservoir boundaries. Hence, these reservoir and well characteristics are reflected in the system's constant-rate drawdown pressure-transient response, and some of these reservoir and well characteristics may potentially be recovered from the response function by conventional methods of well-test analysis.
Direct measurement of constant-rate transient-pressure response does not normally yield good-quality data because of our inability to accurately control rates and because the well pressure is very sensitive to rate variations. For this reason, typical well tests are not single-rate, but variable-rate, tests. A well-test sequence normally includes several flow periods. During one or more of these flow periods, the well is shut in. Often, only the pressure data acquired during shut-in periods have the quality required for pressure-transient analysis. The pressure behavior during the individual flow period of a multirate test sequence depends on the flow history before this flow period. Hence, it is not the same as a constant-rate system-response function. The well-test-analysis theory that evolved over the past 50 years has been built around the idea of applying a special time transform to the test pressure data so that the pressure behavior during individual flow periods would be similar in some way to constant-rate drawdown-pressure behavior. The superposition-time transform commonly used for this purpose does not completely remove all effects of previous rate variation. There are sometimes residual superposition effects left, and this often complicates test analysis. An alternative approach is to convert the pressure data acquired during a variable-rate test to equivalent pressure data that would have been obtained if the well flowed at constant rate for the duration of the whole test. This is the pressure/rate deconvolution problem. Pressure/rate deconvolution has been a subject of research by a number of authors over the past 40 years. Pressure/rate deconvolution reduces to the solution of an integral equation. The kernel and the right side of the equation are given by the rate and the pressure data acquired during a test. 
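As a concrete illustration of the superposition-time idea described above, the sketch below evaluates a log-based (infinite-acting radial flow) superposition time for a simple rate history. The function name, the rate history, and the normalization by the last rate change are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def superposition_time(t, t_start, q):
    """Log-based superposition time at time t for a piecewise-constant rate
    history: rate q[j] applies from t_start[j] onward. Normalized by the last
    rate change (one common convention; others divide by the last rate)."""
    dq_n = q[-1] - q[-2] if len(q) > 1 else q[-1]
    s = 0.0
    for j in range(len(q)):
        q_prev = q[j - 1] if j > 0 else 0.0          # rate before period j
        s += (q[j] - q_prev) / dq_n * np.log(t - t_start[j])
    return s

# Drawdown at 100 STB/D for 10 h, then shut-in; evaluate 1 h into the buildup.
print(superposition_time(11.0, [0.0, 10.0], [100.0, 0.0]))
```

For a single drawdown of duration t_p followed by shut-in, this reduces (up to the sign fixed by the normalization convention) to the familiar Horner time log((t_p + Δt)/Δt). Deconvolution, by contrast, dispenses with such transforms and instead solves the integral equation described above.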
This problem is ill conditioned, meaning that small changes in input (test pressure and rates) lead to large changes in output result—a deconvolved constant-rate pressure response. The ill-conditioned nature of the pressure/rate deconvolution problem, combined with errors always present in the test rate and pressure data, makes the problem highly unstable. A variety of different deconvolution algorithms have been proposed in the literature.3–8 However, none of them is robust enough to be implemented in the commercial well-test-analysis software used most widely in the industry. Recently, von Schroeter et al.1,2 published a deconvolution algorithm that has been shown to work when a reasonable level of noise is present in test pressure and rate data. In our independent implementation and evaluation of the algorithm, we have found that it works well on consistent sets of pressure and rate data. It fails, however, when used with inconsistent data. Examples of such inconsistencies include wellbore storage or skin factor changing during a well-test sequence. Some degree of inconsistency is almost always present in real test data. Therefore, the deconvolution algorithm in the form described in the references cited cannot work reliably with real test data. In this paper, we describe the enhancements of the deconvolution algorithm that allow it to be used reliably with real test data. We demonstrate application of the pressure/rate deconvolution analysis to several real test examples.
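To see why stability is the central difficulty, the toy sketch below poses deconvolution as a discretized convolution system and stabilizes it with a simple Tikhonov-style curvature penalty. This is a generic illustration on synthetic data, not the von Schroeter et al. algorithm (which works with a nonlinear encoding of the log-log response); all names and numbers are made up.

```python
import numpy as np

n, dt = 200, 0.05
t = dt * np.arange(1, n + 1)
g_true = np.exp(-t) + 0.05                 # synthetic unit-rate response derivative
q = np.where(t < 5.0, 1.0, 0.3)            # two-rate history (illustrative)

# Discretized convolution: dp_i = sum_{j<=i} q_{i-j} * g_j * dt
A = dt * np.array([[q[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
dp = A @ g_true + 0.002 * np.random.default_rng(0).standard_normal(n)  # noisy pressure drop

# Second-difference operator: penalizing ||D g|| discourages the oscillatory
# solutions that the ill-conditioned unregularized problem produces.
D = np.diff(np.eye(n), 2, axis=0)
lam = 1.0
g_hat = np.linalg.lstsq(np.vstack([A, lam * D]),
                        np.concatenate([dp, np.zeros(n - 2)]), rcond=None)[0]
```

Setting lam = 0 recovers the unregularized problem, where the same noise level typically produces a wildly oscillating estimate.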


SPE Journal ◽  
1996 ◽  
Vol 1 (04) ◽  
pp. 413-426 ◽  
Author(s):  
A.C. Reynolds ◽  
Nanqun He ◽  
Lifu Chu ◽  
D.S. Oliver

2000 ◽  
Vol 3 (01) ◽  
pp. 74-79 ◽  
Author(s):  
Nanqun He ◽  
Dean S. Oliver ◽  
Albert C. Reynolds

Summary Generating realizations of reservoir permeability and porosity fields that are conditional to static and dynamic data is difficult. The constraints imposed by dynamic data are typically nonlinear and the relationship between the observed data and the petrophysical parameters is given by a flow simulator which is expensive to run. In addition, spatial organization of real rock properties is quite complex. Thus, most attempts at conditioning reservoir properties to dynamic data have either approximated the relationship between data and parameters so that complex geologic models could be used, or have used simplified spatial models with actual production data. In this paper, we describe a multistep procedure for efficiently generating realizations of reservoir properties that honor dynamic data from complex stochastic models. First, we generate a realization of the rock properties that is conditioned to static data, but not to the pressure data. Second, we generate a realization of the production data (i.e., add random errors to the production data). Third, we find the property field that is as close as possible to the uncalibrated realization and also honors the realization of the production data. The ensemble of realizations generated by this procedure often provides a good empirical approximation to the a posteriori probability density function for reservoir models and can be used for Monte Carlo inference. We apply the above procedure to the problem of conditioning a three-dimensional stochastic model to data from two well tests. The real-field example contains two facies. Permeabilities within each facies were generated using a "cloud transform" that honored the observed scatter in the crossplot of permeability and porosity. We cut a volume, containing both test wells, from the full-field model, then scaled it up to about 9,000 cells before calibrating to pressure data.
Although the well-test data were of poor quality, the data provided information on the permeabilities within the regions of investigation and on the overall permeability average.

Introduction The problem of generating plausible reservoir models that are conditional to dynamic or production-type data has been an active area of research for several years. Existing studies can be classified by the way in which they approach three key aspects of the problem: (1) the complexity of the stochastic geologic or petrophysical model; (2) the method of computing the pressure response from a reservoir model; and (3) the attention paid to the problem of sampling realizations from the a posteriori probability density function. Most researchers have worked with simple models (e.g., characterized by a variogram), an effective well-test permeability instead of a flow simulator, and largely ignored the problem of sampling. Other, more sophisticated examples include the use of a complex stochastic geologic model (channels), and simulated annealing to sample from the a posteriori probability distribution function (PDF), but an effective well-test permeability instead of pressure data (and a simulator) for conditioning.1 The works by Oliver2 and by Chu et al.3 provide other examples. In these cases, a flow simulator was used for conditioning but the geology was relatively simple and realizations were generated using a linearization approximation around the maximum a posteriori model. Landa4 treated the problem of conditioning two-dimensional channels, but chose a simple model that could be described by a few parameters. A large part of our effort has gone into ensuring that the ensemble of realizations that we generated would be representative of the uncertainty in the reservoir properties. In order to do this rigorously, we have used the actual pressure data but have had to limit ourselves to Gaussian random fields and to fairly small synthetic models.
We recently applied Markov chain Monte Carlo (MCMC) methods5 to generate an ensemble of realizations because we believe they provide the best framework for ensuring that we obtain a representative set of realizations suitable for making economic decisions. The principal advantage of MCMC is that it provides a method for sampling realizations from complicated probability distributions such as the distribution of reservoir models conditional to production data. The method consists of a proposal of a new realization, and a decision as to whether to accept the proposed realization, or to again accept the current realization. The "chain" refers to the sequence of accepted realizations and "Monte Carlo" refers to the stochastic aspect of the proposal and acceptance steps. Unfortunately, it appears to be impractical to use MCMC methods for generating realizations that are conditional to production data. If realizations are proposed from a relatively simple probability density function (e.g., multivariate Gaussian), then most realizations are rejected and the method is inefficient. Alternatively, if realizations are proposed from a PDF that is complicated but close to the desired PDF, the Metropolis-Hastings criterion, which involves the ratio of the probability of proposing the proposed realization to the probability of proposing the current realization, is difficult to evaluate. Oliver et al.6 proposed a methodology for incorporating production data that followed the second approach but ignored the Metropolis-Hastings criterion, instead accepting every realization. We showed that the method is rigorously valid for conditioning Gaussian random fields to linear data (i.e., weighted averages of model variables) and is easily adapted to more complex geostatistical models and types of data. Although the method is then not rigorously correct, we have shown that the distribution of realizations is good for simple, but highly nonlinear problems.
The realizations generated using this methodology still honor all the data—the ensemble of realizations is, however, not a perfect representation of the true distribution even as the number of realizations becomes very large.


1994 ◽  
Vol 46 (07) ◽  
pp. 607-615 ◽  
Author(s):  
G.S. Feitosa ◽  
Lifu Chu ◽  
L.G. Thompson ◽  
A.C. Reynolds

2020 ◽  
Vol 17 (5) ◽  
pp. 899-902
Author(s):  
Felipe de Oliveira ◽  
Andrea Lins ◽  
Abelardo Barreto ◽  
Marcos Craizer ◽  
Helio Lopes ◽  
...  

Mathematics ◽  
2019 ◽  
Vol 7 (10) ◽  
pp. 989
Author(s):  
Fengbo Zhang ◽  
Yuandan Zheng ◽  
Zhenyu Zhao ◽  
Zhi Li

In this paper, noise removal from well test data is considered. We approximate the well test data by a Legendre expansion and employ a truncation strategy to reduce the noise. The truncation parameter is chosen by a discrepancy principle, and a corresponding convergence result is obtained. The theoretical analysis shows that a good numerical approximation can be obtained by the new method. Moreover, the method directly yields stable numerical derivatives of the pressure data. Finally, we give some numerical tests to show the effectiveness of the method.
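A rough sketch of this idea, using a generic least-squares Legendre fit with a discrepancy-principle stopping rule on synthetic data (the signal, noise level, and safety factor are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 400)            # test time mapped to [-1, 1]
p_clean = np.log1p(5.0 * (x + 1.0))        # smooth stand-in for a pressure signal
delta = 0.01                               # assumed known noise level
p_noisy = p_clean + delta * rng.standard_normal(x.size)

tau = 1.1                                  # discrepancy-principle safety factor
for n in range(1, 40):
    coef = legendre.legfit(x, p_noisy, n)  # truncated Legendre expansion, degree n
    fit = legendre.legval(x, coef)
    rms = np.sqrt(np.mean((fit - p_noisy) ** 2))
    if rms <= tau * delta:                 # residual has dropped to the noise level
        break

p_denoised = fit
dp_dx = legendre.legval(x, legendre.legder(coef))   # stable derivative of the fit
```

Because the fit is a smooth polynomial, differentiating it (legder) gives a stable estimate of the pressure derivative, which is the quantity of interest in derivative-based well-test analysis.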


2018 ◽  
Vol 8 (4) ◽  
pp. 1519-1534 ◽  
Author(s):  
Seyedeh Robab Moosavi ◽  
Jafar Qajar ◽  
Masoud Riazi
