Rapid Analysis Tool for High-Level Risk Assessment of Pipelines

Author(s):  
Iain R. Colquhoun
Evelyn Choong
Richard Kania
Ming Gao
Pat Wickenhauser

When the benefits of using risk-based decision making in pipeline integrity management programs have been identified, operators are immediately faced with the challenge of a large amount of risk analysis work. This work frequently has to be done with minimal resources and/or in logistical situations that require a graduated approach extending over several years. In answering this challenge, a starting point must be identified that focuses resources where the risks are greatest. Since these locations are generally unknown in the first instance, a tool is needed to perform a first, high-level assessment that identifies areas requiring further or more detailed study to support the integrity management program. A robust tool is also needed to direct the assessment of smaller lines that might not require the detailed attention generally given to larger-diameter transmission lines. This paper describes the extension of a simple indexing methodology comprising both theoretical and historical components to produce such a tool. It describes the use of so-called “smart” defaults to account for missing data, and a rudimentary decision model that can be used to grade the risk results. Examples are given of applications of the methodology to a gathering system and to the high-level evaluation of a transmission system. The paper also compares the results obtained to those of other, more detailed methodologies.
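The indexing idea with “smart” defaults can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' model: the attribute names, weights, and grading thresholds are invented for the example, and the conservative defaults simply ensure that missing data never lowers the score.

```python
# Hypothetical sketch of an indexing-style risk score with "smart" defaults:
# missing attributes fall back to conservative values so that absent data
# never makes a segment look safer than it is. All names, weights, and
# thresholds are invented for illustration.

CONSERVATIVE_DEFAULTS = {
    "coating_condition": 1.0,    # worst-case sub-index in [0, 1]
    "soil_corrosivity": 1.0,
    "third_party_activity": 1.0,
}

WEIGHTS = {
    "coating_condition": 0.4,
    "soil_corrosivity": 0.3,
    "third_party_activity": 0.3,
}

def index_score(segment_data: dict) -> float:
    """Weighted sum of threat sub-indices; higher means higher relative risk."""
    return sum(
        w * segment_data.get(attr, CONSERVATIVE_DEFAULTS[attr])
        for attr, w in WEIGHTS.items()
    )

def grade(score: float) -> str:
    """Rudimentary decision model: bin the index into review categories."""
    if score >= 0.7:
        return "detailed assessment"
    if score >= 0.4:
        return "monitor"
    return "acceptable"
```

A segment with no data at all defaults to the worst case and is graded for detailed assessment, which is the conservative behavior a first-pass screen should have.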

Author(s):  
Yong-Yi Wang
Don West
Douglas Dewar
Alex McKenzie-Johnson
Millan Sen

Ground movements, such as landslides and subsidence/settlement, can pose serious threats to pipeline integrity. The consequences of these incidents can be severe. In the absence of systematic integrity management, preventing and predicting incidents related to ground movements can be difficult. A ground movement management program can reduce the potential for such incidents. Some basic concepts and terms relevant to the management of ground movement hazards are introduced first. A ground movement management program may cover a long segment of a pipeline with a threat of failure at unknown locations. Identifying such locations and understanding the potential magnitude of the ground movement are often the starting point of a management program. In other cases, management activities may start after an event is known to have occurred. A sample response process is shown to illustrate key considerations and decision points after the evidence of an event is discovered. Such a process can involve fitness-for-service (FFS) assessment when appropriate information is available. The framework and key elements of FFS assessment are explained, including safety factors on strain capacity. The use of FFS assessment is illustrated through the assessment of the tensile failure mode. Assessment models are introduced, including key factors affecting the outcome of an assessment. The unique features of girth welds in vintage pipelines are highlighted because the management of such pipelines is a high priority in North America and perhaps in other parts of the world. Common practice and appropriate considerations in a pipeline replacement program in areas of potential ground movement are highlighted. It is advisable to replace pipes with pipes of similar strength and stiffness so that strains can be distributed as broadly as possible.
The chemical composition of the pipe steels and the mechanical properties of the pipes should be such that the possibility of HAZ softening and weld strength undermatching is minimized. In addition, the benefits and cost of using the workmanship flaw acceptance criteria of API 1104 or equivalent standards in making repair and cut-out decisions for vintage pipelines should be evaluated against the possible use of FFS assessment procedures. FFS assessment provides a quantifiable performance target that is not available through the workmanship criteria. However, the necessary inputs to perform FFS assessment may not be readily available. Ongoing work intended to address some of these gaps is briefly described.
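The pass/fail logic of a strain-based FFS check can be sketched as below. This is only an illustration of applying a safety factor to strain capacity; the factor value and function signature are placeholders, not values from any standard or from the paper.

```python
def tensile_strain_check(strain_demand: float,
                         strain_capacity: float,
                         safety_factor: float = 2.0) -> bool:
    """FFS-style tensile check: strain demand must not exceed the
    factored strain capacity. The default safety factor is a placeholder
    for illustration, not a recommendation from any standard."""
    if safety_factor <= 0:
        raise ValueError("safety factor must be positive")
    return strain_demand <= strain_capacity / safety_factor
```

In a ground-movement context, the demand would come from an estimate of the imposed displacement and the capacity from a strain-capacity model of the girth weld.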


Author(s):  
Martin Zaleski
Tom Greaves
Jan Bracic

The Canadian Standards Association’s Publication Z662-07, Annex N provides guidelines for pipeline integrity management programs. Government agencies that regulate pipelines in Alberta, British Columbia and other Canadian jurisdictions are increasingly using Annex N as the standard to which pipeline operators are held. This paper describes the experience of Pembina Pipeline Corporation (Pembina) in implementing a geohazards management program to fulfill components of Annex N. Central to Pembina’s program is a ground-based inspection program that feeds a geohazards database designed to store geotechnical and hydrotechnical site information and provide relative rankings of geohazard sites across the pipeline network. This geohazard management program fulfills several aspects of the Annex, particularly: record keeping; hazard identification and assessment; risk assessment and reduction; program planning; inspections and monitoring; and mitigation. Pembina’s experience in growing their geohazard inventory from 65 known sites to over 1300 systematically inspected and catalogued sites in a span of approximately two years is discussed. Also presented are methods by which consultants and Pembina personnel contribute to the geohazard inspection program and geohazard inventory, and how the ground inspection observations trigger follow-up inspections, monitoring and mitigation activities.
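The relative-ranking step of such a geohazard database can be sketched as follows; the rating scales and field names are hypothetical and not drawn from Pembina's system.

```python
# Hypothetical sketch of relative geohazard ranking: each inspected site
# carries likelihood and consequence ratings, and sites are ordered
# worst-first across the network to prioritize follow-up. Field names and
# the 1-5 scales are illustrative, not Pembina's actual scheme.

def site_score(likelihood: int, consequence: int) -> int:
    """Relative score on 1-5 rating scales; higher means worse."""
    return likelihood * consequence

def rank_sites(sites: list) -> list:
    """Order sites worst-first for inspection and mitigation planning."""
    return sorted(
        sites,
        key=lambda s: site_score(s["likelihood"], s["consequence"]),
        reverse=True,
    )
```

With rankings computed across the whole inventory, the worst sites surface first, so ground inspections can trigger monitoring and mitigation in priority order.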


Author(s):  
Len LeBlanc
Walter Kresic
Sean Keane
John Munro

This paper describes the integrity management framework utilized within the Enbridge Liquids Pipelines Integrity Management Program. The role of the framework is to provide the high-level structure used by the company to prepare and demonstrate integrity safety decisions relative to mainline pipelines and, where applicable, facility piping segments. The scope is directed to corrosion, cracking, and deformation threats and all variants within those broad categories. The basis for the framework centers on the use of a safety case to provide evidence that the risks affecting the system have been effectively mitigated. A ‘safety case’, for the purposes of this methodology, is defined as a structured argument demonstrating that the evidence is sufficient to show that the system is safe.[1] The decision model brings together data integration and the determination of maintenance timing; execution of prevention, monitoring, and mitigation; confirmation that the execution has met reliability targets; application of additional steps if targets are not met; and the collation of the results into an engineering assessment of program effectiveness (the safety case). Once the program is complete, continuous improvement is built into the next program through the incorporation of research and development solutions, lessons learned, and process improvements. On the basis of a wide range of experiences, investigations and research, it was concluded that combinations of monitoring and mitigation methods are required in an integrity program to effectively manage integrity threats. A safety case approach ultimately provides the structure for measuring the effectiveness of integrity monitoring and mitigation efforts, and the methodology to assess whether a pipeline is sufficiently safe, with targets for continuous improvement.
Hence, the safety case serves to provide transparent, quantitative integrity program performance results that are continually improved upon through ongoing revalidation and improvement of the methods utilized. This enables risk reduction, better stakeholder awareness, focused innovation, and opportunities for industry information sharing, along with other benefits.


Author(s):  
Jean A. Garrison

The core decision-making literature argues that leaders and their advisors operate within a political and social context that determines when and how they matter to foreign policy decision making. Small groups and powerful leaders become important when they have an active interest in and involvement with the issue under discussion; when the problem is perceived to be a crisis and important to the future of the regime; in novel situations requiring more than the simple application of existing standard operating procedures; and when high-level diplomacy is involved. Irving Janis’s groupthink and Graham Allison’s bureaucratic politics serve as the starting points in the study of small groups and foreign policy decision making. There are three distinct structural arrangements of decision groups: formalistic/hierarchical, competitive, and collegial advisory structures, which vary in their centralization and in how open they are to the input of various members of the decision group. Considering the leader, group members, and influence patterns, it is possible to see that decision making within a group rests on the symbiotic relationship between the leader and members of the group, or among group members themselves. Indeed, the interaction among group members creates particular patterns of behavior that affect how the group functions, how the policy process will evolve, and how it will likely influence policy outcomes. Ultimately, the study of small group decision making must overcome the persistent challenge of differentiating its role in foreign policy analysis from that of other decision units, and must expand further beyond the American context.


Author(s):  
Alejandro Reyes
Otto Huisman

Workflows are the fundamental building blocks of business processes in any organization today. These workflows have attributes and outputs that make up various operational, management and supporting processes, which in turn produce a specific outcome in the form of business value. Risk Assessment and Direct Assessment are examples of such processes; they define the individual tasks integrity engineers should carry out. According to ISO 55000, achieving excellence in asset management requires clearly defined objectives, transparent and consistent decision making, and a long-term strategic view. Specifically, it recommends well-defined policies and procedures (processes) to bring about performance and cost improvements, improved risk management, business growth and enhanced stakeholder confidence through compliance and improved reputation. In reality, such processes are interpreted differently all over the world, and the workflows that make up these processes are often defined by individual engineers and experts. An excellent example of this is Risk Assessment, where significant local variations in data sources, threat sources and other data elements require the business to tailor its activities and the models used. Successful risk management is about enabling transparent decision making through clearly defined process steps, but in practice it requires maintaining a degree of flexibility to tailor the process to specific organizational needs. In this paper, we introduce common building blocks that have been identified as making up a Risk Assessment process and further examine how these blocks can be connected to fulfill the needs of multiple stakeholders, including data administrators, integrity engineers and regulators.
Moving from a broader business process view to a more focused integrity management view, this paper demonstrates how to formalize Risk Assessment processes by describing the activities, steps and deliverables of each, using Business Process Model and Notation (BPMN) as the standard modeling technique and extending it with an integrity-specific notation we have called the Integrity Modelling Language (IML). It is shown that flexible modelling of integrity processes based on existing standards and best practices is possible within a structured approach, one which guides users and provides a transparent and auditable process inside the organization and beyond, based on commonalities defined by best-practice guidelines such as ISO 55000.
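The building-block composition idea can be sketched in code. The step names and behavior below are invented for illustration; actual BPMN/IML models carry far richer semantics (gateways, events, lanes) than this linear pipeline.

```python
# Illustrative sketch of composable risk-assessment building blocks: each
# block is a step that transforms a shared context, and a process is an
# explicit, auditable ordering of blocks. Step names and contents are
# invented; real BPMN/IML models also include gateways, events, and lanes.

from typing import Callable

Step = Callable[[dict], dict]

def gather_data(ctx: dict) -> dict:
    ctx["data_ready"] = True
    return ctx

def assess_threats(ctx: dict) -> dict:
    ctx["threats"] = ["corrosion", "third-party damage"] if ctx.get("data_ready") else []
    return ctx

def report(ctx: dict) -> dict:
    ctx["report"] = f"{len(ctx.get('threats', []))} threats assessed"
    return ctx

def run_process(steps: list, ctx: dict) -> dict:
    """Execute the blocks in order; the list itself is the process model."""
    for step in steps:
        ctx = step(ctx)
    return ctx
```

Because the process is expressed as data, an auditor can inspect the exact sequence of steps that produced a given assessment, and local variations are handled by swapping or reordering blocks rather than rewriting the process.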


Author(s):  
David Mangold
W. Kent Muhlbauer
Jim Ponder
Tony Alfano

Risk management of pipelines is a complex challenge due to the dynamic environment of the real world coupled with a wide range of system types installed over many decades. Various methods of risk assessment are currently used in industry, many of which utilize relative scoring. These assessments are often not designed for the new integrity management program (IMP) requirements and are under direct challenge by regulators. SemGroup had historically used relative risk assessment methodologies to help support risk management decision-making. While the formality offered by these early methods provided benefits, it was recognized that, in order to more effectively manage risk and better meet the United States IMP objectives, a more effective risk assessment would be needed. A rapid and inexpensive migration to a better risk assessment platform was sought. The platform needed to be applicable not only to pipeline miles, but also to station facilities and all related components. The risk results had to be readily understandable and scalable, capturing risks from ‘trap to trap’ in addition to risks accompanying each segment. The solution appeared in the form of a quantitative risk assessment that was ‘physics-based’ rather than classical, statistics-based QRA. This paper outlines the steps involved in this transition process and shows how quantitative risk assessment may be efficiently implemented to better guide integrity decision-making, illustrated with a case study from SemGroup.
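The segment-level aggregation described above (risk captured per segment and rolled up 'trap to trap') can be sketched as follows; the field names and the simple frequency-times-consequence form are assumptions for illustration, not SemGroup's model.

```python
# Hedged sketch of segment-level quantitative risk aggregation: each
# segment contributes expected loss = frequency x length x consequence,
# and a trap-to-trap section total is the sum over its segments. Field
# names, units, and the simple product form are illustrative assumptions.

def segment_risk(freq_per_mile_year: float,
                 length_miles: float,
                 consequence: float) -> float:
    """Expected loss per year for one segment."""
    return freq_per_mile_year * length_miles * consequence

def trap_to_trap_risk(segments: list) -> float:
    """Total risk for a trap-to-trap section: sum of its segments."""
    return sum(
        segment_risk(s["freq"], s["length"], s["consequence"])
        for s in segments
    )
```

Because the totals are additive, the same computation scales from a single segment to a whole section, which is what makes the results "readily understandable and scalable" at both levels.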


2020, Vol. 39 (5), pp. 5987-5997
Author(s):  
Sezi Cevik Onar
Cengiz Kahraman
Basar Oztaysi

Catastrophes due to widespread outbreaks create long-lasting disruption and have an accelerating transmission. Uncontrolled outbreaks cause not only health-related problems but also supply chain problems. The outbreak caused by the coronavirus (COVID-19) shows how vulnerable countries' healthcare systems, and supporting systems such as supply chains, are to this type of disaster. Keeping high levels of inventory, especially of healthcare products, can help overcome such shortage problems. Nevertheless, keeping a high level of inventory can be costly, and the durability of the products imposes a limit. Decision-makers have to set inventory levels carefully by considering many factors, such as the criticality of a product and the ease of producing it. In this study, we develop a decision model for defining inventory levels in healthcare systems by considering multiple scenarios, such as outbreaks. A novel spherical regret-based multi-criteria decision-making approach is developed and used for evaluating the total regret of not keeping stock of healthcare equipment.
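The regret idea underlying the approach can be illustrated with classical minimax regret across scenarios; the spherical fuzzy extension developed in the paper is not reproduced here, and the option names and cost figures in the example are invented.

```python
# Illustration of the regret idea with classical minimax regret: for each
# inventory option, regret in a scenario is its cost minus the best cost
# achievable in that scenario; choose the option whose worst-case regret
# is smallest. Option names, scenarios, and costs are invented.

def regret_table(costs: dict) -> dict:
    """Map costs[option][scenario] to regret[option][scenario]."""
    scenarios = next(iter(costs.values())).keys()
    best = {sc: min(c[sc] for c in costs.values()) for sc in scenarios}
    return {opt: {sc: c[sc] - best[sc] for sc in c} for opt, c in costs.items()}

def minimax_regret(costs: dict) -> str:
    """Option that minimizes the maximum regret across scenarios."""
    regrets = regret_table(costs)
    return min(regrets, key=lambda opt: max(regrets[opt].values()))
```

In this framing, keeping high stock costs more in the normal scenario but avoids the large regret of being unstocked during an outbreak, which is exactly the trade-off the paper evaluates.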


Author(s):  
Robert A. McElroy

Recently enacted U.S. regulations will require distribution system operators to develop Distribution Integrity Management Programs (DIMP). The purpose of this regulation is to reduce system operating risks and the probability of failure by requiring operators to establish a documented, systematic approach to evaluating and managing risks associated with their pipeline systems. Distribution Integrity Management places new and significant requirements on distribution operators’ Geographic Information System (GIS). Operators already gather much of the data needed for meeting this regulation. The challenge lies in efficiently and accurately integrating and evaluating all system data so operators can identify and implement measures to address risks, monitor progress and report on results. Similar to the role geospatial solutions played in helping transmission pipeline operators meet Integrity Management Program requirements, this paper will discuss the role GIS can play in helping operators meet the DIMP regulations. Data requirements, storage and integration will also be presented. The paper will give examples of how risk-based decision making can improve operational efficiency and resource allocation.


Author(s):  
Louis Fenyvesi
Brian Rothwell
Iain Colquhoun

Typical risk assessment processes produce risk estimates by multiplying together single-valued, expected failure frequencies and associated consequences. However, a range of consequences can result from an incident, and a more representative estimate of failure frequency is captured by a distributed variable rather than by a single point value. Risk estimates calculated by typical assessment processes are sometimes referred to as “mean” estimates or “cautious best estimates”. This terminology acknowledges implicitly that there is truly a range of possible values. Meta-risk is a potential approach for analyzing risk that captures this uncertainty by utilizing distributions of failure frequency and consequence in place of point estimates. These distributions are combined to form a risk distribution that can then be used more directly in quantified decision making. Meta-risk improves on the principle of “As low as reasonably practicable” (ALARP) by acknowledging that the levels of uncertainty associated with models used in the risk assessment process are not equal. By providing “probability of exceedance” targets relative to defined risk acceptance criteria, the meta-risk approach allows for quantified decision making that addresses both the level of risk and the associated level of uncertainty. This process allows an analyst to compare risks more accurately from multiple hazards between which levels of uncertainty may vary greatly, and to quantify the benefits of integrity management strategies such as condition monitoring whose primary effect is to reduce uncertainty rather than to reduce risk directly.
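The meta-risk construction (distributions of failure frequency and consequence combined into a risk distribution, graded by probability of exceedance) can be sketched with a small Monte Carlo routine. The lognormal distribution choices and their parameters below are assumptions for illustration only.

```python
# Minimal Monte Carlo sketch of the meta-risk idea: sample failure
# frequency and consequence from distributions rather than using point
# values, form a risk distribution, and grade it by the probability of
# exceeding an acceptance criterion. The lognormal distributions and
# their parameters are assumptions for illustration only.

import random

def risk_distribution(n: int = 10_000, seed: int = 1) -> list:
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        freq = rng.lognormvariate(mu=-7.0, sigma=0.5)         # failures per year
        consequence = rng.lognormvariate(mu=12.0, sigma=1.0)  # cost per failure
        samples.append(freq * consequence)
    return samples

def prob_of_exceedance(samples: list, criterion: float) -> float:
    """Fraction of the risk distribution above the acceptance criterion."""
    return sum(1 for r in samples if r > criterion) / len(samples)
```

Two hazards with the same mean risk can then be distinguished by their exceedance probabilities, and an integrity strategy such as condition monitoring that narrows the distributions shows up directly as a lower probability of exceedance, even when the mean risk is unchanged.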


Author(s):  
Min An
Yong Qin

Railway safety is a very complicated subject that is determined by numerous factors. Many qualitative and quantitative railway safety and risk analysis techniques and methods are used in the industry. However, the railway industry faces problems and challenges in applying these techniques and methods effectively and efficiently, particularly in circumstances where the risk data are incomplete or a high level of uncertainty is involved in the risk data. This chapter discusses the problems and challenges of railway safety and risk analysis methods in dealing with uncertainties, and the growing needs of the industry. A well-established technique is also introduced that can be used to identify major hazards and to evaluate both qualitative and quantitative risk data and information associated with railway operation efficiently and effectively, in an acceptable way, in various environments.

