Unconventional reserves and resources in the Petroleum Resource Management System (PRMS): a square peg in a round hole?

2014 ◽  
Vol 54 (2) ◽  
pp. 518
Author(s):  
Douglas Peacock

Estimation and reporting of unconventional hydrocarbon reserves and resources have been a subject of intense focus in recent years. As unconventional hydrocarbons become increasingly important, it is essential that practices keep pace with a rapidly changing industry. The PRMS was primarily developed for conventional hydrocarbons, although it is applicable to all accumulations, including unconventional ones. Force-fitting the PRMS for use in unconventional reservoirs is problematic, and many key areas would benefit from better definition and guidance. These areas include: assessment and reporting of prospective resources, definition of a prospect, risk assessment, definition of a discovery, extent of discovery, and linkage of reserves to the definition of a project. Unconventional gas developments for LNG export present particular challenges. One primary purpose of reserves and resources definitions is to provide consistency of terminology and reporting for all parties involved, including operators, investors, governments, and regulatory bodies. Within the industry, there is widespread acceptance that unconventional hydrocarbons are different, not only in how they are developed but also in how reserves and resources are evaluated and reported. Present practices may not fit neatly into the PRMS requirements, so compromises must be made. In particular, the PRMS axes of risk and uncertainty become blurred under present unconventional practices. This extended abstract highlights the many issues that make estimation and reporting of unconventional resources problematic within the PRMS, and it suggests possible solutions to enable a more appropriate set of definitions and guidelines to be prepared.

2002 ◽  
Vol 5 (04) ◽  
pp. 302-310
Author(s):  
Herman G. Acuna ◽  
D.R. Harrell

Summary
The use of probabilistic methods has led to inconsistent interpretations of how such methods should be applied while still complying with reserves-certification guidelines. The objective of this paper is to present and discuss some pitfalls commonly encountered in the application of probabilistic methods to evaluate reserves. Several regulatory guidelines that should be followed during the generation of recoverable-hydrocarbon distributions are discussed. An example is also given to illustrate the evolution of reserves categories as a function of probabilities. Most of the conflicting reserves interpretations can be attributed to the constraints of regulatory bodies [e.g., the U.S. Securities and Exchange Commission (SEC)] and the current SPE/World Petroleum Congresses (WPC) reserves definitions, in which reserves categories are expressed in terms of the probabilities of being achieved. For example, proved reserves are defined as those hydrocarbon volumes with at least a 90% probability of being equaled or exceeded (P90). Unfortunately, these definitions alone fall short as guidance on how to derive the distributions from which these percentiles will be calculated. This may lead to distributions that do not comply with the remaining guidelines. While a P90 can be calculated from a noncomplying distribution, proved reserves may not be assigned at that percentile level.

Introduction
In 1997, new reserves definitions were drafted and introduced by SPE and WPC. For the first time, these reserves definitions included language to address the increased interest in probabilistic analysis for estimating hydrocarbon reserves. Proved reserves were defined, in part, as those volumes of recoverable hydrocarbons with " . . . a high degree of confidence that the quantities will be recovered. If probabilistic methods are used, there should be at least a 90% probability that the quantities actually recovered will equal or exceed the estimate."1 This definition may be interpreted to mean that satisfying the P90 criterion is sufficient to define proved reserves. We will discuss later in this paper why defining proved reserves as the P90 of any distribution is not always appropriate. Also, the definitions do not specify at what level the evaluator should apply the P90 test (i.e., at the field level or the total portfolio level?). These points are further clarified in the 2001 update of the SPE/WPC definitions.2 Probable reserves were described in the SPE/WPC definitions as those recoverable hydrocarbon volumes that " . . . are more likely than not to be recoverable. In this context, when probabilistic methods are used, there should be at least a 50% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable reserves."1 Possible reserves were defined as those recoverable hydrocarbon volumes that " . . . are less likely to be recoverable than probable reserves. In this context, when probabilistic methods are used, there should be at least a 10% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable plus possible reserves."1 The SEC does not recognize probable and possible reserves. The SEC's guidelines for reporting proved reserves are set forth in its Regulation S-X, Rule 4-10 and subsequent clarifying bulletins. Regulation S-X, Rule 4-10 contains no guidelines for the interpretation of probabilistic analysis; it defines proved reserves as those recoverable hydrocarbon volumes with " . . . reasonable certainty to be recoverable in future years from known reservoirs . . ."3 Both the SPE/WPC and SEC proved reserves definitions have several other requirements, usually applicable to deterministic methods, that may conflict with probabilistic analysis if not properly incorporated. Evaluators of reserves should exercise caution when using probabilistic methods to ensure compliance with the reserves definitions adopted by the SEC and SPE/WPC. Caution is required because there are situations in which indiscriminate application of probabilistic methods may produce results that are inconsistent with the reserves definitions. For example, the SEC definition of proved reserves does not explicitly recognize the use of the probabilistic method, and it in no way allows the probabilistic method to be used in a manner that violates any term of that definition. In this paper, we first present a short definition of probabilistic analysis and the risks and benefits of using this technique. Next, we address some significant shortcomings in the current reserves definitions and then present examples of how some of these shortcomings can be addressed in the evaluation of reserves.

Discussion of Probabilistic Analysis of Reserves
The probabilistic analysis of reserves relies on probabilistic techniques to estimate the uncertainty of the recoverable hydrocarbon volumes. In its purest sense, these methods are used to collect, organize, evaluate, present, and summarize data. They provide the tools to analyze large amounts of representative data so that the significance of the data's variability and dependability can be measured and understood. Probabilistic analysis should be considered an important tool for internal analysis, allowing companies to understand and rank their hydrocarbon reserves and resources and the associated risks. It also provides the tools to identify the upside and downside hydrocarbon potential, to better organize the company's portfolio, and to allocate capital and manpower more efficiently. However, it should be understood that the objectives of a hydrocarbon-property ranking study and an SPE/WPC or SEC reserves-reporting evaluation might differ. For example, companies may have their own guidelines for grouping and analyzing hydrocarbon assets when allocating company resources or evaluating property acquisitions. These company guidelines may vary from project to project or from year to year (depending on pricing assumptions) and may differ from the guidelines provided in the SPE/WPC and SEC definitions. It then becomes the primary challenge of the evaluator to reconcile both evaluations.
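The percentile conventions above (P90 for proved, P50 for proved plus probable, P10 for proved plus probable plus possible) and the field-versus-portfolio question can be illustrated with a small Monte Carlo sketch. The lognormal distributions and their parameters below are purely hypothetical and are not taken from the paper; the sketch only shows why the P90 of a portfolio sum generally exceeds the sum of the per-field P90s for independent fields, which is one reason the level at which the P90 test is applied matters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical recoverable-volume distributions for two independent fields
# (lognormal parameters are illustrative only).
field_a = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)
field_b = rng.lognormal(mean=2.5, sigma=0.8, size=100_000)
portfolio = field_a + field_b

def p90(volumes):
    # Reserves convention: P90 is the volume with a 90% probability of
    # being equaled or exceeded, i.e. the 10th percentile from below.
    return np.percentile(volumes, 10)

sum_of_p90s = p90(field_a) + p90(field_b)
p90_of_sum = p90(portfolio)

# Diversification pulls the portfolio's lower tail toward its mean, so
# summing per-field P90s understates the portfolio-level P90.
print(sum_of_p90s < p90_of_sum)  # True for independent fields

# Category analogues on the portfolio: 1P <= 2P <= 3P by construction.
p90_vol = np.percentile(portfolio, 10)  # proved (1P) analogue
p50_vol = np.percentile(portfolio, 50)  # proved + probable (2P) analogue
p10_vol = np.percentile(portfolio, 90)  # proved + probable + possible (3P)
print(p90_vol <= p50_vol <= p10_vol)  # True
```

Note that the ordering 1P ≤ 2P ≤ 3P follows automatically from taking percentiles of a single distribution; the paper's point is that a percentile computed this way is meaningful only if the underlying distribution itself complies with the remaining definitional requirements.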


2013 ◽  
Vol 42 (4) ◽  
pp. 3-9
Author(s):  
Armin Geertz

This introduction to the special issue on narrative discusses various ways of approaching religious narrative. It looks at various evolutionary hypotheses and distinguishes between three fundamental aspects of narrative: 1. the neurobiological, psychological, social and cultural mechanisms and processes, 2. the many media and methods used in human communication, and 3. the variety of expressive genres. The introduction ends with a definition of narrative.


Think India ◽  
2019 ◽  
Vol 22 (3) ◽  
pp. 72-83
Author(s):  
Tushar Kadian

The basic needs approach postulates securing the elementary conditions of existence for every human being. Despite the practical and theoretical importance of the subject, the greatest irony is the non-availability of any universal preliminary definition of the concept of basic needs. This, in turn, accounts for the unpredictability of the various political programmes aiming at providing basic needs to the people. A shift in perspective is necessary for the development of this or any other conception. No labour reforms could be achieved in history as long as labourers were treated as objects. It was only after they began to be treated as subjects, and labour unions were allowed to represent themselves in strategy formulation, that labour reforms became a reality. The present research paper highlights basic needs as a foundation of human rights.


Mediaevistik ◽  
2018 ◽  
Vol 31 (1) ◽  
pp. 366-366
Author(s):  
Albrecht Classen

Eddic poetry constitutes one of the most important genres in Old Norse or Scandinavian literature and has been studied since the earliest days of modern philology. The progress made in that field is impressive, considering the many excellent editions and translations, not to mention the countless critical studies in monographs and articles. Nevertheless, there is always a great need to revisit, summarize, review, and digest the knowledge gained so far. The present handbook intends to address all those goals and does so, to spell it out right away, exceedingly well. In contrast to traditional concepts, however, the individual contributions constitute fully developed critical articles, each elucidating a specialized topic as comprehensively as possible and concluding with a section of notes. Those notes are kept very brief, but the volume rounds it all off with an inclusive, comprehensive bibliography, and there is also a very useful index at the end. At the beginning, following the table of contents, we find a list of the contributors (unfortunately without email addresses), a list of translations and abbreviations of the titles of Eddic poems in the Codex Regius and elsewhere, and a very insightful and pleasant introduction by Carolyne Larrington. She briefly introduces the genre and then summarizes the essential points made by the individual authors. The entire volume is based on the Eddic Network established by the three editors in 2012, and on two workshops held at St. John’s College, Oxford, in 2013 and 2014.


Author(s):  
John Hunsley ◽  
Eric J. Mash

Evidence-based assessment relies on research and theory to inform the selection of constructs to be assessed for a specific assessment purpose, the methods and measures to be used in the assessment, and the manner in which the assessment process unfolds. An evidence-based approach to clinical assessment necessitates the recognition that, even when evidence-based instruments are used, the assessment process is a decision-making task in which hypotheses must be iteratively formulated and tested. In this chapter, we review (a) the progress that has been made in developing an evidence-based approach to clinical assessment in the past decade and (b) the many challenges that lie ahead if clinical assessment is to be truly evidence-based.


2021 ◽  
Vol 7 ◽  
pp. 237802312110244
Author(s):  
Katrin Auspurg ◽  
Josef Brüderl

In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why results diverged so much. We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.


2021 ◽  
Vol 11 (2) ◽  
Author(s):  
María Jiménez-Buedo

Reactivity, the phenomenon by which subjects tend to modify their behavior by virtue of being studied, is often cited as one of the most important difficulties involved in social-scientific experiments, and yet there is to date a persistent conceptual muddle when dealing with the many dimensions of reactivity. This paper offers a conceptual framework for reactivity that draws on an interventionist approach to causality. The framework allows us to offer an unambiguous definition of reactivity and distinguishes it from placebo effects. Further, it allows us to distinguish between benign and malignant forms of the phenomenon, depending on whether reactivity constitutes a danger to the validity of the causal inferences drawn from experimental data.


2002 ◽  
Vol 21 (2) ◽  
pp. 103-104 ◽  
Author(s):  
G Carelli ◽  
I Iavicoli

The authors comment on Calabrese and Baldwin’s paper ‘Defining Hormesis’, which is, to date, the first attempt to provide a definition of hormesis that goes beyond the different interpretations of the phenomenon reported in the literature. While appreciating the effort made in that study to place hormesis in a context that is at once general and specific, the authors believe some clarifications are needed regarding the quantitative features of the phenomenon. In this connection, they ask whether Calabrese and Baldwin think it appropriate to include hormesis assessment criteria in the document, referring in particular to those reported in a previous paper. The authors share Calabrese and Baldwin’s conclusion that future experimental models designed to study hormetic phenomena must include the time factor, which not only ensures that the phenomenon will be detected but also makes it possible to identify the specific type of hormesis.


1956 ◽  
Vol 21 ◽  
pp. 156-159
Author(s):  
O. G. S. Crawford

The prudent contributor to a Festschrift will select some subject about which he thinks he knows as much as the professor who is to receive it. That is peculiarly difficult here because of the vast range of Professor Childe’s knowledge, both in time and space, far exceeding the present contributor’s. This Note is offered as a grateful tribute from one of the many who have been intellectually enriched by his writings and encouraged by his devotion to scholarship. It is little more than an amplification and criticism of the Abbé Breuil’s classic Presidential Address to the Prehistoric Society of East Anglia, delivered in 1934; but on the strength of observations made in August and September, 1955, I have come to different conclusions.

The Abbé Breuil detected five successive techniques, all of them found on the stones of the Boyne Tombs:

(1) Incised thin lines (pl. XIX, B).
(2) Picked grooves left rough (pl. XVIII).
(3, a) Picked grooves afterwards rubbed smooth; in this and the preceding group ‘it is invariably the line (groove) itself on which the pattern depends, which gives and is the design’.
(3, b) Picked areas which ‘only define the limits of the pattern, the surface, left in relief by the cutting down of the background, constituting the actual design’ (pl. XX, B).
(4) Rectilinear patterns where also the pattern is residual, consisting of raised ribs, forming triangles or lozenges, left standing by picking away the surrounding surface (pl. XX, A).


2000 ◽  
Vol 29 (4) ◽  
pp. 477-517 ◽  
Author(s):  
MARGRET SELTING

The notion of Turn-Constructional Unit (TCU) in Conversation Analysis has become unclear for many researchers. The underlying problems inherent in the definition of this notion are here identified, and a possible solution is suggested. This amounts to separating more clearly the notions of TCU and Transition Relevance Place (TRP). In this view, the TCU is defined as the smallest interactionally relevant complete linguistic unit, in a given context, that is constructed with syntactic and prosodic resources within their semantic, pragmatic, activity-type-specific, and sequential conversational context. It ends in a TRP unless particular linguistic and interactional resources are used to project and postpone the TRP to the end of a larger multi-unit turn. This suggestion tries to spell out some of the assumptions that the seminal work in CA made in principle, but never formulated explicitly.

