Utilizing Modern Data and Technologies for Pipeline Risk Assessment

Author(s):  
David Mangold ◽  
Ryan Huntley

Risk management of gas and hazardous liquid pipeline systems is a core element of US integrity management regulations (49 CFR part 192, subpart O; 49 CFR 195.452) and a challenging responsibility facing operators worldwide. The importance of health, safety, and environmental protection demands a continuous evolution of industry expectations, practices, and regulations, with regulators and operators invariably seeking ways to advance risk modeling methodologies for pipeline risk assessment. The evolution to more advanced risk modeling methodologies marks a transitional trend from simple, relative risk models to robust, quantitative risk models. A common challenge when implementing a more advanced risk model is supplying sufficient supporting data. This challenge highlights a necessary, parallel progression: the expansion of data availability and improvement of data quality to support risk management. Many data resources have become available to aid this progression through advancements in spatial processing, computational technologies, and data collection and availability across industries. Powerful analysis tools are now available to supply pressure loss, overland flow, liquid trace, and gas dispersion information to enhance risk modeling, along with a vast and increasing amount of publicly available data and statistics. Proper integration of this information can greatly reduce the challenges associated with the implementation of quantitative risk assessment and better support risk-based decision making. This paper details the availability and utilization of modern data and technologies for pipeline risk assessment. Examples are provided which illustrate the integration of data and technology resources to support a robust, quantitative risk model.

Author(s):  
Jane Dawson ◽  
Iain Colquhoun ◽  
Inessa Yablonskikh ◽  
Russell Wenz ◽  
Tuan Nguyen

Current risk assessment practice in pipeline integrity management tends to use semi-quantitative index-based or model-based methodologies. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach to suit the operator's data availability and quality, and analysis needs. The paper also discusses experiences of implementing this type of risk model in Pipeline Integrity Management System (PIMS) software and the use and integration of data via existing pipeline geographical information systems (GIS).
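The calibration idea described in this abstract can be sketched in a few lines. The sketch below is a hedged illustration only: the score bins, failure rates, and consequence cost are invented for the example and are not values from the paper, which does not publish its calibration tables.

```python
# Sketch of deterministic QRA via calibration of semi-quantitative index
# scores to peer failure rates. All bins, rates, and costs are illustrative
# assumptions, not values from the paper.

# Hypothetical peer statistics: observed failure rates (failures/km-yr)
# keyed by semi-quantitative index score ranges (low score = poor condition).
PEER_FAILURE_RATES = {
    (0, 30): 1.0e-3,
    (30, 60): 2.5e-4,
    (60, 100): 5.0e-5,
}

def calibrated_failure_rate(index_score: float) -> float:
    """Map a semi-quantitative index score to a peer-calibrated failure rate."""
    for (lo, hi), rate in PEER_FAILURE_RATES.items():
        if lo <= index_score < hi:
            return rate
    raise ValueError("score outside calibrated range")

def deterministic_risk(index_score: float, consequence_cost: float) -> float:
    """Point-value risk = calibrated failure rate x consequence estimate."""
    return calibrated_failure_rate(index_score) * consequence_cost

# Example: a segment scoring 45 with a hypothetical $2M consequence estimate.
risk = deterministic_risk(45, 2_000_000)
```

In practice the bins and rates would come from peer pipeline incident statistics rather than the fixed table above, but the point-value structure (failure rate times consequence) is the same.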


2018 ◽  
Vol 46 (2) ◽  
pp. 185-209 ◽  
Author(s):  
Laurel Eckhouse ◽  
Kristian Lum ◽  
Cynthia Conti-Cook ◽  
Julie Ciccolini

Scholars in several fields, including quantitative methodologists, legal scholars, and theoretically oriented criminologists, have launched robust debates about the fairness of quantitative risk assessment. As the Supreme Court considers addressing constitutional questions on the issue, we propose a framework for understanding the relationships among these debates: layers of bias. In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them. The second layer covers biases embedded in data. Using data from a racially biased criminal justice system can lead to unmeasurable biases in both risk scores and outcome measures. The final layer engages conceptual problems with risk models: Is it fair to make criminal justice decisions about individuals based on groups? We show that each layer depends on the layers below it: Without assurances about the foundational layers, the fairness of the top layers is irrelevant.
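The tradeoffs between types of statistical fairness mentioned in the top layer can be made concrete with a toy calculation. The groups, outcomes, and classifier flags below are fabricated for illustration and are not data from the article.

```python
# Toy illustration (invented data, not the article's) of two statistical
# fairness criteria for a risk-score classifier, computed per group.

def rates(labels, preds):
    """Return (positive-prediction rate, false-positive rate) for one group."""
    ppr = sum(preds) / len(preds)
    negatives = [p for y, p in zip(labels, preds) if y == 0]
    fpr = sum(negatives) / len(negatives)
    return ppr, fpr

# Hypothetical observed outcomes (1 = reoffended) and classifier flags.
group_a = dict(labels=[0, 0, 0, 1, 1], preds=[1, 0, 0, 1, 1])
group_b = dict(labels=[0, 0, 1, 1, 1], preds=[0, 1, 1, 1, 1])

ppr_a, fpr_a = rates(**group_a)
ppr_b, fpr_b = rates(**group_b)

# Demographic parity compares positive-prediction rates across groups;
# error-rate balance compares false-positive rates. When base rates differ
# across groups, these criteria generally cannot all be satisfied at once.
```

Here the two groups have different base rates of the outcome, so a classifier that equalizes one quantity will typically unbalance the other, which is the kind of tradeoff the first layer of the framework describes.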


Author(s):  
K. Madhu Kishore Raghunath ◽  
S. L. Tulasi Devi ◽  
Chandra Sekhar Patro

World is vicinity full of opportunities given the amount of economic and non-economic transactions taking place every moment. With ubiquitous opportunities all around, businesses can assume inherent risk everywhere in one or the other way. In this chapter, the authors have deliberated the general business scenario to prove the given inferences. The readers will come across why the risk management is gaining so much gravity and across risk strategy of top business players. The chapter will bring into light the various risk factors in business and study the various risk assessment models present to fortify the negativity of these risk factors. Simultaneously, the authors will draw empirical evidence on the effectiveness, qualitative and quantitative risk models have on risk factors in public and private business organisations.


Author(s):  
Gencer Erdogan ◽  
Phu H. Nguyen ◽  
Fredrik Seehusen ◽  
Ketil Stølen ◽  
Jon Hofstad ◽  
...  

Risk-driven testing and test-driven risk assessment are two strongly related approaches, though the latter is less explored. This chapter presents an evaluation of a test-driven security risk assessment approach to assess how useful testing is for validating and correcting security risk models. Based on the guidelines for case study research, two industrial case studies were analyzed: a multilingual financial web application and a mobile financial application. In both case studies, the testing yielded new information, which was not found in the risk assessment phase. In the first case study, new vulnerabilities were found that resulted in an update of the likelihood values of threat scenarios and risks in the risk model. New vulnerabilities were also identified and added to the risk model in the second case study. These updates led to more accurate risk models, which indicate that the testing was indeed useful for validating and correcting the risk models.
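The feedback loop this chapter evaluates, where test findings update likelihood values in the risk model, can be sketched as a simple data-structure update. The scenario name, vulnerability, and likelihood scale below are hypothetical placeholders, not details from the case studies.

```python
# Minimal sketch (names, scale, and values are assumptions) of test results
# feeding back into a security risk model: a vulnerability found during
# testing raises the likelihood of the threat scenario that exploits it.

risk_model = {
    "SQL injection on login form": {"likelihood": "unlikely", "vulns": []},
}

LIKELIHOOD_SCALE = ["rare", "unlikely", "possible", "likely", "certain"]

def record_test_finding(model, scenario, vulnerability):
    """Attach a newly found vulnerability and bump the scenario's likelihood."""
    entry = model[scenario]
    entry["vulns"].append(vulnerability)
    i = LIKELIHOOD_SCALE.index(entry["likelihood"])
    entry["likelihood"] = LIKELIHOOD_SCALE[min(i + 1, len(LIKELIHOOD_SCALE) - 1)]

record_test_finding(risk_model, "SQL injection on login form",
                    "unsanitized 'username' parameter")
```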


2016 ◽  
pp. 1016-1037
Author(s):  
Gencer Erdogan ◽  
Fredrik Seehusen ◽  
Ketil Stølen ◽  
Jon Hofstad ◽  
Jan Øyvind Aagedal

The authors present the results of an evaluation in which the objective was to assess how useful testing is for validating and correcting security risk models. The evaluation is based on two industrial case studies. In the first case study the authors analyzed a multilingual financial Web application, while in the second case study they analyzed a mobile financial application. In both case studies, the testing yielded new information which was not found in the risk assessment phase. In particular, in the first case study, new vulnerabilities were found which resulted in an update of the likelihood values of threat scenarios and risks in the risk model. New vulnerabilities were also identified and added to the risk model in the second case study. These updates led to more accurate risk models, which indicate that the testing was indeed useful for validating and correcting the risk models.


2018 ◽  
Vol 10 (9) ◽  
pp. 3239 ◽  
Author(s):  
Di Liu ◽  
Xiaoying Liang ◽  
Hai Chen ◽  
Hang Zhang ◽  
Nanzhao Mao

As a tool that can effectively support ecosystem management, ecological risk assessment is closely related to the sustainable development of ecosystems and human well-being and has become an active area of research in ecology, geography and other disciplines. Taking Dujiashi Gully as a study area for gully loess erosion, a comprehensive risk assessment system identifying risk probability, sensitivity and impairment was established. The spatial distribution of comprehensive ecological risk was analyzed, ecological risk management categories were delineated based on the dominant risk factor, and risk management strategies were formulated for loess regions. The results were as follows: (1) comprehensive ecological risk varied significantly across the research area. The regions with extremely high and high risk were mainly located in gully areas and secondary erosion gullies, which cover 28.02% of the study area. The extremely low-risk areas covered one third of the study area and were mainly distributed to the northwest and south of the study area, where hills are widely spaced. (2) The combined analysis of ecological risk and terrain found that, from north to south, elevation decreased first and then rose, whereas comprehensive ecological risk increased first and then decreased. Comprehensive ecological risk and terrain generally showed an inverse relationship. (3) The study area was divided into four risk management categories. Risk monitoring zones, habitat recovery zones, monitoring and recovery zones and natural regulation zones encompass 14.84%, 12.44%, 26.47% and 46.25% of the study area, respectively. According to the four risk management categories, different risk reduction measures were designed to improve regional sustainable development capacity.
Risk identification and risk management categories based on the comprehensive ecological risk model can inform a sustainable development path for the social ecosystem and local farmers and provide a method for sustainable development in similar gully landscapes.
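The structure of such an assessment, combining probability, sensitivity, and impairment per cell and then zoning by the dominant factor, can be sketched simply. This is a simplified illustration with made-up grid values and fewer categories than the paper's four; the threshold and classification rule are assumptions, not the study's method.

```python
# Illustrative sketch (values and rules invented) of a comprehensive
# ecological risk index combining probability, sensitivity, and potential
# impairment per cell, then assigning a management category.

def risk_index(probability, sensitivity, impairment):
    """Comprehensive risk as the product of three components in [0, 1]."""
    return probability * sensitivity * impairment

def management_category(risk, prob, sens, high=0.3):
    """Zone by the dominant factor when risk is high; else natural regulation."""
    if risk < high:
        return "natural regulation zone"
    if prob >= sens:
        return "risk monitoring zone"     # probability-dominated cell
    return "habitat recovery zone"        # sensitivity-dominated cell

cells = [
    dict(prob=0.9, sens=0.8, imp=0.7),   # e.g. an active erosion gully
    dict(prob=0.2, sens=0.3, imp=0.4),   # e.g. a widely spaced hill area
]
zones = [management_category(risk_index(c["prob"], c["sens"], c["imp"]),
                             c["prob"], c["sens"]) for c in cells]
```

A real implementation would evaluate these components on spatial rasters rather than a list of dictionaries, but the per-cell combine-then-classify logic is the same.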


2012 ◽  
Vol 06 (03) ◽  
pp. 270-279 ◽  
Author(s):  
Esra Uzer Celik ◽  
Necmi Gokay ◽  
Mustafa Ates

Objectives: The aims of this study were to: (1) evaluate the caries risk in young adults using Cariogram and (2) compare the efficiency of Cariogram with the regression risk models created using the same variables in Cariogram by examining the actual caries progression over a 2-year period. Results: Diet frequency, plaque amount and secretion rate were significantly associated with caries increment (P<.05). Cariogram and the regression risk models explained the caries formation at a higher rate than single variables. However, the regression risk model developed from diet frequency, plaque amount and secretion rate explained the caries formation similarly to Cariogram, while the other regression model developed from all variables used in Cariogram explained the caries formation at a higher rate than this computer program. Conclusions: Cariogram is effective and can be used for caries risk assessment instead of single variables; however, it is possible to deve


2020 ◽  
Author(s):  
Jeroen Aerts

Despite billions of dollars of investments in disaster risk reduction (DRR), data over the period 1994-2013 show that natural disasters claimed 1.35 million lives. Science has responded with more timely and accurate information on the dynamics of risk and vulnerability to natural hazards, such as floods. This information is essential for designing and implementing effective climate change adaptation and DRR policies. However, how much do we really know about how the main agents in DRR (individuals, businesses, governments, NGOs) use these data? How do agents behave before, during, and after a disaster, given that this can dramatically affect the impact and recovery time? Since existing risk assessment methods rarely include this critical 'behavioral adaptation' factor, significant progress has been made in the scientific community to address human adaptation activities (development of flood protection, reservoir operations, land management practices) in physically based risk models.

This presentation gives a historic overview of the most important developments in DRR science for flood risk. Traditional risk methods integrate vulnerability and adaptation using a 'top-down' scenario approach, where climate change, socio-economic trends and adaptation are treated as external forcing on a physically based risk model (e.g., a hydrological or storm surge model). Vulnerability research has made significant steps in identifying the relevant vulnerability indicators but has not yet provided the necessary tools to dynamically integrate vulnerability into flood risk models.

Recent research, however, shows novel methods to integrate human adaptive behavior with flood risk models. Integrating behavioral adaptation dynamics in agent-based risk models may lead to a more realistic characterization of the risks and an improved assessment of the effectiveness of risk management strategies and investments. With these improved methods, it is also shown that in the coming decades human behavior is an important driver of flood risk projections compared to other drivers, such as climate change. This presentation shows how these recent innovations in flood risk assessment provide novel insights for flood risk management policies.
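The agent-based integration of behavioral adaptation described above can be sketched in miniature. Every parameter below (flood probability, adoption rate, damage values) is an illustrative assumption, not a calibrated figure from flood risk research; the sketch only shows the structural idea that adaptive agents damp cumulative damage relative to a static model.

```python
# Minimal agent-based sketch (all parameters invented) of behavioral
# adaptation in flood risk: households that experience a flood may invest
# in protection, which reduces their damage in later floods.
import random

def simulate(n_households=1000, years=30, p_flood=0.1,
             adopt_after_flood=0.4, damage=100.0, protected_factor=0.2,
             seed=42):
    flood_rng = random.Random(seed)       # shared flood sequence
    adopt_rng = random.Random(seed + 1)   # separate stream for adoption
    flood_years = [flood_rng.random() < p_flood for _ in range(years)]
    protected = [False] * n_households
    total_damage = 0.0
    for flood in flood_years:
        if not flood:
            continue
        for i in range(n_households):
            total_damage += damage * (protected_factor if protected[i] else 1.0)
            if not protected[i] and adopt_rng.random() < adopt_after_flood:
                protected[i] = True       # behavioral adaptation
    return total_damage

with_adaptation = simulate()
without_adaptation = simulate(adopt_after_flood=0.0)
# Under the same flood sequence, cumulative damage with adaptation never
# exceeds the static (no-adaptation) model's damage.
```

A static 'top-down' model corresponds to the run with adoption switched off; the difference between the two runs is what the behavioral layer contributes to the projection.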

