Adapting Probabilistic Methods To Conform to Regulatory Guidelines

2002 ◽  
Vol 5 (04) ◽  
pp. 302-310
Author(s):  
Herman G. Acuna ◽  
D.R. Harrell

Summary
Probabilistic methods have introduced inconsistent interpretations of how they should be applied while still complying with reserves certification guidelines. The objective of this paper is to present and discuss some pitfalls commonly encountered in the application of probabilistic methods to evaluate reserves. Several regulatory guidelines that should be followed during the generation of recoverable hydrocarbon distributions are discussed. An example is also given to illustrate the evolution of reserves categories as a function of probabilities. Most of the conflicting reserves interpretations can be attributed to the constraints of regulatory bodies [e.g., the U.S. Securities and Exchange Commission (SEC)] and the current SPE/World Petroleum Congresses (WPC) reserves definitions, in which reserves categories are expressed in terms of the probabilities of being achieved. For example, proved reserves are defined as those hydrocarbon volumes with at least a 90% probability of being equaled or exceeded (P90). Unfortunately, these definitions alone fall short as guidance on how to derive the distributions from which these percentiles will be calculated. This may lead to distributions that do not comply with the remaining guidelines. While a P90 can be calculated from a noncomplying distribution, proved reserves may not be assigned at this percentile level.

Introduction
In 1997, new reserves definitions were drafted and introduced by SPE and WPC. For the first time, these reserves definitions included some language to address the increased interest in probabilistic analysis to estimate hydrocarbon reserves. Proved reserves were defined, in part, as those volumes of recoverable hydrocarbons with " . . . a high degree of confidence that the quantities will be recovered. If probabilistic methods are used, there should be at least a 90% probability that the quantities actually recovered will equal or exceed the estimate."1 The interpretation of this definition may be that satisfying the P90 criterion is sufficient to define proved reserves. We will discuss later in this paper why defining proved reserves as the P90 of any distribution is not always appropriate. Also, the definitions do not specify at what level the evaluator should apply the P90 test (i.e., is it at the field level or the total portfolio level?). These points are further clarified in the 2001 update of the SPE/WPC definitions.2 Probable reserves were then described in the SPE/WPC definitions as those recoverable hydrocarbon volumes that " . . . are more likely than not to be recoverable. In this context, when probabilistic methods are used, there should be at least a 50% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable reserves."1 Possible reserves were defined as those recoverable hydrocarbon volumes that " . . . are less likely to be recoverable than probable reserves. In this context, when probabilistic methods are used, there should be at least a 10% probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable plus possible reserves."1 The SEC does not recognize probable and possible reserves. The SEC's guidelines for reporting proved reserves are set forth in its Regulation S-X, Rule 4-10 and subsequent clarifying bulletins. In Regulation S-X, Rule 4-10, there are no guidelines for the interpretation of probabilistic analysis.
The regulation defines proved reserves as those recoverable hydrocarbon volumes with " . . . reasonable certainty to be recoverable in future years from known reservoirs . . ."3 Both the SPE/WPC and SEC proved reserves definitions have several other requirements, usually associated with deterministic methods, that may conflict with probabilistic analysis if not properly incorporated. Evaluators of reserves should exercise caution when using probabilistic methods to ensure compliance with the reserves definitions adopted by the SEC and SPE/WPC. Caution is required because there are certain situations in which indiscriminate application of probabilistic methods may produce results that are inconsistent with the reserves definitions. For example, the SEC definition of proved reserves does not explicitly recognize the use of the probabilistic method and in no way allows the probabilistic method to be used in such a manner as to violate any term of that definition. In this paper, we will first present a short definition of probabilistic analysis and the risks and benefits of using this technique. Next, we will address some significant shortcomings in the current reserves definitions and then present some examples of how some of these shortcomings can be addressed in the evaluation of reserves.

Discussion of Probabilistic Analysis of Reserves
The probabilistic analysis of reserves relies on the use of probabilistic techniques to estimate the uncertainty of the recoverable hydrocarbon volumes. In their purest sense, these probabilistic methods are used to collect, organize, evaluate, present, and summarize data. These methods provide the tools to analyze large amounts of representative data so that the significance of the data's variability and dependability can be measured and understood. Probabilistic analysis should be considered an important tool for internal analysis, allowing companies to understand and rank their hydrocarbon reserves and resources and the associated risks. This method provides the tools to identify the upside and downside hydrocarbon potential, to better organize the company's portfolio, and to allocate capital and manpower resources more efficiently. However, it should be understood that the objectives of a hydrocarbon-property ranking study and an SPE/WPC or SEC reserves reporting evaluation might be different. For example, companies may have their own guidelines to group and analyze hydrocarbon assets to allocate company resources or for property acquisitions. These company guidelines may vary from project to project or from year to year (depending on pricing assumptions) and may be different from those guidelines provided in the SPE/WPC and SEC definitions. It then becomes the primary challenge of the evaluator to reconcile both evaluations.
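As an illustration of the percentile convention quoted above, the following minimal sketch (not from the paper; the lognormal parameters and volumes are hypothetical) shows how P90, P50, and P10 are typically read from a Monte Carlo distribution of recoverable volumes:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical recoverable-volume distribution, MMbbl (lognormal by assumption).
volumes = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=100_000)

# SPE/WPC convention: P90 is the volume with at least a 90% probability of
# being equaled or exceeded, i.e. the 10th percentile of the volume distribution.
p90 = np.percentile(volumes, 10)   # candidate proved (1P)
p50 = np.percentile(volumes, 50)   # candidate proved + probable (2P)
p10 = np.percentile(volumes, 90)   # candidate proved + probable + possible (3P)

print(f"P90 = {p90:.1f}  P50 = {p50:.1f}  P10 = {p10:.1f}  (MMbbl)")
# As the paper stresses, these percentiles only support reserves categories if
# the underlying distribution itself complies with the remaining guidelines.
```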

2014 ◽  
Vol 54 (2) ◽  
pp. 518
Author(s):  
Douglas Peacock

Estimation and reporting of unconventional hydrocarbon reserves and resources have been a subject of intense focus in recent years. As unconventional hydrocarbons become increasingly important, it is essential that practices keep pace with a rapidly changing industry. The PRMS was primarily developed for conventional hydrocarbons although it is applicable to all accumulations, including unconventional. Force-fitting the PRMS for use in unconventional reservoirs is problematic. Many key areas would benefit from better definition and guidance. These areas include: assessment and reporting of prospective resources, definition of a prospect, risk assessment, definition of a discovery, extent of discovery, and linkage of reserves to the definition of project. Unconventional gas developments for LNG export present particular challenges. One primary purpose of reserves and resources definitions is to provide consistency of terminology and reporting for all parties involved including operators, investors, governments, and regulatory bodies. Within the industry, there is widespread acceptance that unconventional hydrocarbons are different, not only in how they are developed but also in how reserves and resources are evaluated and reported. Present practices may not fit neatly into the PRMS requirements, so compromises must be made. In particular, the PRMS axes of risk and uncertainty become blurred with present unconventional practices. This extended abstract highlights the many issues that make estimation and reporting of unconventional resources problematic within the PRMS and it suggests possible solutions to enable a more appropriate set of definitions and guidelines to be prepared.


Author(s):  
Dan Vlaicu

This work presents the development of generic models that emulate the behavior of finite element models under cyclic loads, with the probabilistic representation based on samplings of base-model data for a variety of test cases. The base model is a pipe with a notch subjected to pressure loading, translated into hoop stress, and to thermal loading applied as a cyclic load through the pipe thickness. The probabilistic method takes variations of the nonlinear material properties, loading conditions, and geometrical dimensions. The response variables are defined in terms of stress intensity for the static analyses; for the nonlinear calculations they are the total accumulated strain and the strain ranges, translated into the number of allowable load cycles by using Manson's common slope method. A Bree diagram converted into an interaction diagram is used to correlate the results of the nonlinear cyclic analyses with the ASME Code limits for primary and secondary loads from linear elastic analyses, and shakedown toward the steady cycle is identified in terms of the local and global components of strain. Furthermore, Bayesian statistics extends the results of the nonlinear cyclic analysis by applying the interpretation of statistical results to scenarios either not accessible to frequentist statistics or better served by complex stochastic models.
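The following sketch illustrates, under stated assumptions, the kind of sampling loop described in the abstract: material, loading, and strain-range variations are drawn at random and each strain range is converted into an allowable number of cycles with a Manson-type relation. The universal-slopes form used here and all parameter values are assumptions for illustration, not the author's base-model data or exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)


def allowable_cycles(strain_range, sigma_u, E, eps_f):
    """Solve a Manson-type universal-slopes relation for N_f by bisection:
    strain_range = 3.5*(sigma_u/E)*N_f**-0.12 + eps_f**0.6 * N_f**-0.6
    """
    lo, hi = 1.0, 1e9
    for _ in range(80):                       # bisection in log space
        mid = np.sqrt(lo * hi)
        eps = 3.5 * (sigma_u / E) * mid ** -0.12 + eps_f ** 0.6 * mid ** -0.6
        if eps > strain_range:                # sustainable strain still above demand
            lo = mid                          # -> allowable life is longer
        else:
            hi = mid
    return np.sqrt(lo * hi)


# Hypothetical input uncertainties (spreads and values are assumptions).
n = 5_000
sigma_u = rng.normal(500e6, 25e6, n)          # ultimate strength, Pa
E = rng.normal(200e9, 5e9, n)                 # Young's modulus, Pa
eps_f = rng.lognormal(np.log(0.5), 0.1, n)    # fracture ductility
d_eps = rng.normal(0.004, 0.0004, n)          # cyclic strain range from the emulator

Nf = np.array([allowable_cycles(de, su, e, ef)
               for de, su, e, ef in zip(d_eps, sigma_u, E, eps_f)])

print(f"median allowable cycles       : {np.median(Nf):.3g}")
print(f"10th percentile (conservative): {np.percentile(Nf, 10):.3g}")
```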


1974 ◽  
Vol 18 (03) ◽  
pp. 203-213
Author(s):  
Alaa Mansour

Recently, classification societies have taken an interest in the statistical approach to ship longitudinal strength and the probabilistic methods for defining new design criteria. This interest is partially a result of the development and the growing demand for large tankers; it is reflected in the increasing research done by the regulatory bodies in these areas. This paper presents a framework for an approximate probabilistic method to calculate ship longitudinal strength with possible implementation in classification societies rules in mind. The most important features of this method are that it is simple, realistic, consistent, and distribution-free. No assumptions are made with regard to the forms of the distribution functions of the random variables involved in the procedure. Nevertheless, the uncertainties associated with these variables are taken into consideration and are quantified by their coefficients of variation. Analyses of eighteen ships of different types are made in order to serve as a preliminary investigation of the appropriate level of safety as measured by a proposed safety index.
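A minimal sketch of a distribution-free, second-moment safety index of the kind the abstract describes, assuming hypothetical means and coefficients of variation for hull strength and load (the numbers below are illustrative, not taken from the eighteen-ship analysis):

```python
import math

# Hypothetical means and coefficients of variation (COV) for the hull's
# ultimate bending strength R and the total applied bending moment S.
mean_R, cov_R = 3.0e6, 0.12   # strength, kN-m
mean_S, cov_S = 1.6e6, 0.20   # load, kN-m

sigma_R = cov_R * mean_R
sigma_S = cov_S * mean_S

# Distribution-free second-moment safety index:
# beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)
beta = (mean_R - mean_S) / math.sqrt(sigma_R**2 + sigma_S**2)
print(f"safety index beta = {beta:.2f}")
```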


2011 ◽  
Vol 70 (10) ◽  
pp. 1713-1718 ◽  
Author(s):  
P Miossec ◽  
C L Verweij ◽  
L Klareskog ◽  
C Pitzalis ◽  
A Barton ◽  
...  

Rheumatoid arthritis (RA) is one of the most appropriate conditions for the application of personalised medicine as a high degree of heterogeneity has been recognised, which remains to be explained. Such heterogeneity is also reflected in the large number of treatment targets and options. A growing number of biologics as well as small molecules are already in use and there are promising new drugs in development. In order to make the best use of treatment options, both targeted and non-targeted biomarkers have to be identified and validated. To this aim, new rules are needed for the interaction between academia and industry under regulatory control. Setting up multi-centre biosample collections with clear definition of access, organising early, possibly non-committing discussions with regulatory authorities, and defining a clear route for the validation, qualification and registration of the biomarker–drug combination are some of the more critical areas where effective collaboration between the drug industry, academia and regulators is needed.


2020 ◽  
Vol 2020 (9) ◽  
pp. 29-33
Author(s):  
Sergey Bulatov

The purpose of the paper is to estimate the effectiveness of the technological equipment used to repair defective bus transmission units, taking into account its reliability and productivity. The problem consists in determining the time to be spent on the repair of bus transmission units with allowance for the reliability of the technological equipment. The paper uses a probabilistic method for predicting the state of bus transmission units, together with the method of dynamics of averages, which together allow the costs of unit downtime during repair and of the equipment itself to be minimized. The need for repair of a transmission unit (gearbox) arises on average after 650 hours, and the average productivity of the repair bench is 4.2 buses per hour. The bench fails on average after 4600 hours of work, and the average time to repair the bench is 2 hours. Solving the problem thus makes it possible to analyze how far the time for transmission-unit repair must be reduced to avoid long downtimes of buses in repair areas without a negative impact on repair quality and safety in further operation.
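A back-of-the-envelope sketch combining the figures quoted in the abstract (bench MTBF 4600 h, bench repair time 2 h, productivity 4.2 buses/hour); the steady-state availability calculation below is an illustration only and is much simpler than the paper's dynamics-of-averages model:

```python
mtbf_bench = 4600.0   # mean operating time between bench failures, h
mttr_bench = 2.0      # mean time to repair the bench, h
bench_rate = 4.2      # average bench productivity, buses per hour

availability = mtbf_bench / (mtbf_bench + mttr_bench)
effective_rate = bench_rate * availability            # buses/hour, net of downtime
downtime_per_1000h = 1000.0 * (1.0 - availability)    # expected bench downtime, h

print(f"bench availability : {availability:.4f}")
print(f"effective rate     : {effective_rate:.2f} buses/h")
print(f"downtime per 1000 h: {downtime_per_1000h:.2f} h")
```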


2006 ◽  
Vol 25 (2) ◽  
pp. 41-51 ◽  
Author(s):  
Sharad Asthana ◽  
Jayanthi Krishnan

Corporate disclosures of auditor fees (beginning in February 2001) caused considerable concern among regulators and investors about auditor independence because they revealed that nonaudit fees were a substantial proportion of total auditor fees. However, in 2003 the Securities and Exchange Commission (SEC) introduced revised disclosure requirements, specifying a broader definition of audit fees, and additional fee categories (SEC 2003). About 31 percent of our sample firms adopted the new rules in advance of the required date. We investigate the pattern of early adoption of the new fee disclosure rules by companies. Our results indicate that companies with greater nonaudit fee ratios during the prior year, companies that could show a greater decline in nonaudit fee ratios due to reclassification under SEC (2003), and companies that had greater audit-related fees after the reclassification were likely to adopt the new rules early. We conjecture that companies that had the most to gain from reclassifying fees—possibly by reducing negative investor perceptions about nonaudit services—adopted the new rules earlier than required.


2020 ◽  
Vol 12 (17) ◽  
pp. 2809
Author(s):  
Meirman Syzdykbayev ◽  
Bobak Karimi ◽  
Hassan A. Karimi

Detection of terrain features (ridges, spurs, cliffs, and peaks) is a basic research topic in digital elevation model (DEM) analysis and is essential for learning about factors that influence terrain surfaces, such as geologic structures and geomorphologic processes. Detection of terrain features based on general geomorphometry is challenging and has a high degree of uncertainty, mostly due to a variety of controlling factors on surface evolution in different regions. Currently, there are different computational techniques for obtaining detailed information about terrain features using DEM analysis. One of the most common techniques is numerically identifying or classifying terrain elements, where regional topologies of the land surface are constructed by using DEMs or by combining DEM derivatives. The main drawbacks of these techniques are that they cannot differentiate between ridges, spurs, and cliffs, or result in a high degree of false positives when detecting spur lines. In this paper, we propose a new method for automatically detecting terrain features such as ridges, spurs, cliffs, and peaks, using shaded relief by controlling the altitude and azimuth of illumination sources on both smooth and rough surfaces. In our proposed method, we use edge detection filters based on the azimuth angle on the shaded relief to identify specific terrain features. Results show that the proposed method performs similarly to, or in some cases (when detecting spurs) better than, current terrain feature detection methods, such as geomorphon, curvature, and probabilistic methods.
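A minimal sketch of the shaded-relief-plus-edge-filter idea described above, assuming the standard hillshade formulation and a plain Sobel response in place of the authors' azimuth-specific filters; the toy DEM, parameters, and threshold are hypothetical:

```python
import numpy as np
from scipy import ndimage


def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=30.0):
    """Standard shaded-relief computation from DEM gradients."""
    az = np.radians(azimuth_deg)
    zen = np.radians(90.0 - altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)   # aspect convention varies between packages
    shaded = (np.cos(zen) * np.cos(slope)
              + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)


# Toy "terrain" standing in for a real DEM.
dem = np.random.default_rng(1).random((64, 64)).cumsum(axis=0)
relief = hillshade(dem, azimuth_deg=315.0, altitude_deg=30.0)

# Edge response of the shaded relief; the paper ties the filter direction to
# the illumination azimuth, whereas a plain Sobel magnitude is used here.
edges = np.hypot(ndimage.sobel(relief, axis=0), ndimage.sobel(relief, axis=1))
candidates = edges > np.percentile(edges, 95)   # strongest 5% of edges
print(candidates.sum(), "candidate terrain-feature pixels")
```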


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Ruaridh A. Clark ◽  
Malcolm Macdonald

Contact networks provide insights on disease spread due to the duration of close proximity interactions. For systems governed by consensus dynamics, network structure is key to optimising the spread of information. For disease spread over contact networks, the structure would be expected to be similarly influential. However, metrics that are essentially agnostic to the network’s structure, such as weighted degree (strength) centrality and its variants, perform near-optimally in selecting effective spreaders. These degree-based metrics outperform eigenvector centrality, despite disease spread over a network being a random walk process. This paper improves eigenvector-based spreader selection by introducing the non-linear relationship between contact time and the probability of disease transmission into the assessment of network dynamics. This approximation of disease spread dynamics is achieved by altering the Laplacian matrix, which in turn highlights why nodes with a high degree are such influential disease spreaders. From this approach, a trichotomy emerges on the definition of an effective spreader where, for susceptible-infected simulations, eigenvector-based selections can either optimise the initial rate of infection, the average rate of infection, or produce the fastest time to full infection of the network. Simulated and real-world human contact networks are examined, with insights also drawn on the effective adaptation of ant colony contact networks to reduce pathogen spread and protect the queen ant.
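A minimal sketch of the central idea, assuming an exponential contact-time-to-transmission-probability model and a simple eigenvector ranking; the rate parameter beta and the toy contact network are hypothetical, and the paper's construction on the Laplacian is more involved than shown here:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30

# Toy symmetric contact-duration matrix (hours of close contact, ~20% density).
T = np.triu(rng.exponential(scale=1.0, size=(n, n)) * (rng.random((n, n)) < 0.2), 1)
T = T + T.T

beta = 0.5                        # assumed transmission rate per contact-hour
P = 1.0 - np.exp(-beta * T)       # non-linear duration -> transmission probability
np.fill_diagonal(P, 0.0)


def eigencentrality(W):
    """Dominant-eigenvector centrality of a symmetric weight matrix."""
    vals, vecs = np.linalg.eigh(W)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()


rank_duration = np.argsort(-eigencentrality(T))[:5]
rank_probability = np.argsort(-eigencentrality(P))[:5]
print("top spreaders, raw contact time      :", rank_duration)
print("top spreaders, transmission-weighted :", rank_probability)
```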


2021 ◽  
Vol 22 (9) ◽  
pp. 4707
Author(s):  
Mariana Lopes ◽  
Sandra Louzada ◽  
Margarida Gama-Carvalho ◽  
Raquel Chaves

(Peri)centromeric repetitive sequences and, more specifically, satellite DNA (satDNA) sequences, constitute a major human genomic component. SatDNA sequences can vary in a large number of features, including nucleotide composition, complexity, and abundance. Several satDNA families have been identified and characterized in the human genome through time, albeit at different speeds. Human satDNA families present a high degree of sub-variability, leading to the definition of various subfamilies with different organization and clustered localization. The evolution of satDNA analysis has enabled the progressive characterization of satDNA features. Despite recent advances in the sequencing of centromeric arrays, comprehensive genomic studies to assess their variability are still required to provide an accurate and proportional representation of satDNA (peri)centromeric/acrocentric short arm sequences. Approaches combining multiple techniques have been successfully applied and seem to be the path to follow for generating integrated knowledge in the promising field of human satDNA biology.


2021 ◽  
Vol 9 (6) ◽  
pp. 667
Author(s):  
Dracos Vassalos ◽  
M. P. Mujeeb-Ahmed

The paper provides a full description and explanation of the probabilistic method for ship damage stability assessment from its conception to date, with a focus on the probability of survival (s-factor), explaining pertinent assumptions and limitations and describing its evolution for specific application to passenger ships, using contemporary numerical and experimental tools and data. It also compares results between statistical and direct approaches and makes recommendations on how these can be reconciled with a better understanding of the implicit assumptions in the approach, for use in ship design and operation. Evolution over recent years to support pertinent regulatory developments relating to flooding risk (safety level) assessment, as well as research in this direction with a focus on passenger ships, has created a new focus that combines all flooding hazards (collision, bottom and side groundings) to assess potential loss of life as a means of guiding further research and development on damage stability for this ship type. The paper concludes by providing recommendations on the way forward for ship damage stability and flooding risk assessment.
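For reference, a sketch of the widely cited SOLAS 2009 form of the final-stage s-factor discussed in the abstract; the caps (0.12 m, 16 degrees) and the 7/15 degree heel limits for passenger ships are quoted from memory and should be checked against the current regulation text before any real use:

```python
def s_factor_final(gz_max_m, range_deg, heel_eq_deg,
                   theta_min=7.0, theta_max=15.0):
    """Final-stage survivability factor s = K * [(GZmax/0.12)*(Range/16)]**0.25."""
    gz = min(gz_max_m, 0.12)        # GZmax capped at 0.12 m
    rng = min(range_deg, 16.0)      # positive righting range capped at 16 deg
    if heel_eq_deg >= theta_max:
        k = 0.0
    elif heel_eq_deg <= theta_min:
        k = 1.0
    else:
        k = ((theta_max - heel_eq_deg) / (theta_max - theta_min)) ** 0.5
    return k * ((gz / 0.12) * (rng / 16.0)) ** 0.25


# Example damage case (hypothetical numbers).
print(f"s = {s_factor_final(gz_max_m=0.08, range_deg=12.0, heel_eq_deg=9.0):.3f}")
```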

