Geostatistical simulation of reservoir porosity distribution from 3‐D, 3‐C seismic and core data in the lower Nisku formation at Joffre field, Alberta

1996
Author(s): Raúl Cabrera-Garzón, Thomas L. Davis, John F. Arestad


Geophysics, 2005, Vol 70 (3), pp. H1-H10
Author(s): Jens Tronicke, Klaus Holliger

High-resolution geophysical parameter information, such as that provided by crosshole georadar and seismic tomography, has proven useful for complementing traditional hydrological methods such as core analyses, logging techniques, and tracer or pumping tests. Quantitative integration of these diverse database components is one of the major challenges in high-resolution hydrogeophysics because of the different measurement scales involved and the usually weak petrophysical relations among the measurements. In this study, we systematically explore the usefulness of a conditional stochastic simulation approach based on simulated annealing for this purpose. First, we generate a realistic model of an alluvial aquifer consisting of a 2D scale-invariant porosity field. On the basis of this model, we generate synthetic neutron porosity logs and crosshole georadar tomographic surveys. We then use the proposed geostatistical simulation approach to integrate this hydrogeophysical database. The effectiveness of the approach in characterizing the detailed porosity distribution of heterogeneous alluvial aquifers is assessed by comparing results from a variety of simulated porosity fields that differ fundamentally in their conditioning information. Our results indicate that this approach has the potential to provide a realistic characterization of the porosity distribution in heterogeneous alluvial aquifers at the submeter scale.
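The conditional simulation strategy described above can be illustrated with a short, hedged sketch: starting from a realization that honors conditioning data, cells are swapped at random and each swap is accepted or rejected with a Metropolis criterion so that a target spatial statistic is gradually reproduced. The grid size, the lateral-variogram objective, the two synthetic borehole columns, and the cooling schedule below are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of conditional stochastic simulation via simulated annealing,
# assuming the goal is to honor porosity values at two synthetic "borehole"
# columns while matching a target lateral variogram. Grid size, variogram lags,
# conditioning locations, and the cooling schedule are illustrative choices,
# not the implementation used in the study.
import numpy as np

rng = np.random.default_rng(0)
NX, NZ = 64, 32                       # grid dimensions (cells)
LAGS = np.arange(1, 11)               # lag distances (cells) for the x-variogram

def variogram_x(field, lags):
    """Experimental semivariogram along the x-direction for the given lags."""
    return np.array([0.5 * np.mean((field[:, h:] - field[:, :-h]) ** 2) for h in lags])

def objective(field, target):
    """Mismatch between the field's variogram and the target variogram."""
    return np.sum((variogram_x(field, LAGS) - target) ** 2)

# Reference porosity field with lateral correlation (moving-average-smoothed noise).
noise = rng.standard_normal((NZ, NX))
kernel = np.ones(7) / 7.0
smooth = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, noise)
truth = 0.25 + 0.05 * smooth / smooth.std()

cond_mask = np.zeros((NZ, NX), dtype=bool)
cond_mask[:, 5] = cond_mask[:, 50] = True          # two vertical "borehole" logs
target_vario = variogram_x(truth, LAGS)            # target statistics (here taken from the reference)

# Initial realization: conditioning cells fixed, free cells drawn from the data histogram.
field = rng.choice(truth[cond_mask], size=(NZ, NX))
field[cond_mask] = truth[cond_mask]

# Annealing loop: propose swaps of two free cells, accept per the Metropolis criterion.
free_idx = np.argwhere(~cond_mask)
temperature, cooling = 1e-8, 0.999                 # schedule would need tuning in practice
energy = objective(field, target_vario)
for _ in range(5000):
    (i1, j1), (i2, j2) = free_idx[rng.choice(len(free_idx), size=2, replace=False)]
    field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]       # propose swap
    new_energy = objective(field, target_vario)
    if new_energy < energy or rng.random() < np.exp((energy - new_energy) / temperature):
        energy = new_energy                                           # accept
    else:
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]   # reject: undo swap
    temperature *= cooling

print(f"final variogram mismatch: {energy:.3e}")
```

In a practical workflow the objective would also include terms measuring the mismatch to the log data and to the georadar tomograms, weighted against the variogram term; the single-statistic objective here is only meant to show the accept/reject mechanics.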


1998, Vol 10 (1-3), pp. 57-72
Author(s): K. S. B. Keats-Rohan

The COEL database and database software, a combined reference and research tool created by historians for historians, is presented here through screenshots illustrating the underlying theoretical model and the specific situation to which it has been applied. The key emphases are on data integrity and the historian's role in interpreting and manipulating what is often contentious data. From a corpus of sources (Level 1), certain core data are extracted for separate treatment at an interpretive level (Level 3), based upon a master list of the core data (Level 2). The core data are interdependent: each record in Level 2 is of interest in itself, and it may or must be associated with one or more other records as a specific entity. Sometimes the sources are ambiguous and the association is contentious, necessitating a probability-coding approach. The entities created by the association process can then be treated at a commentary level, introducing material external to the database, whether primary or secondary sources. A full discussion of the difficulties is provided within a synthesis of the available information on the core data. Direct access to the source texts is only ever a mouse click away. Fully queryable, COEL is a formidable look-up and research tool for users at all levels, who remain free to exercise alternative judgement on the associations of the core data. In principle, there is no limit on the type of text or core data that could be handled in such a system.
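A minimal sketch of how the three-level structure described above might be represented in code is given below; the class names (Source, CoreRecord, Association), their fields, and the numeric probability codes are hypothetical illustrations, not the actual COEL schema.

```python
# Hypothetical data-model sketch of the three-level structure described above.
# Names, fields, and probability codes are illustrative assumptions, not COEL's schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Source:
    """Level 1: a source text from the corpus."""
    source_id: str
    text: str

@dataclass
class CoreRecord:
    """Level 2: one entry in the master list of core data, tied to its source."""
    record_id: str
    source_id: str
    name_as_recorded: str

@dataclass
class Association:
    """Level 3: an interpretive link between core records forming one entity.
    'probability' encodes how contentious the association is
    (e.g. 1 = certain, 2 = probable, 3 = possible)."""
    entity_id: str
    record_ids: List[str]
    probability: int
    commentary: str = ""

# Usage: two ambiguously related records associated as one probable entity,
# with the contentious identification flagged by the probability code.
src = Source("SRC-001", "Willelmus tenet unam hidam ...")
r1 = CoreRecord("R-001", "SRC-001", "Willelmus")
r2 = CoreRecord("R-002", "SRC-017", "Willelmus filius Rogeri")
entity = Association(
    entity_id="E-001",
    record_ids=[r1.record_id, r2.record_id],
    probability=2,
    commentary="Identification plausible but contested; see discussion of the sources.",
)
print(entity)
```

Commentary at Level 3 can draw on material outside the database, while the Level 2 records remain unchanged, which is one way to keep interpretation separate from the data it interprets.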


2020
Author(s): Tianqi Deng, Joaquín Ambía, Carlos Torres-Verdín, ...
