Reliability-Based Corrosion Management: The Impact of Maintenance and Implications for the Time to Next Inspection

Author(s):  
Mark Stephens ◽  
Maher Nessim ◽  
Albert van Roodselaar

Quantitative analysis based on structural reliability methods is ideally suited to managing corrosion and cracking damage in pipelines as identified through in-line inspection. An ongoing industry-sponsored initiative has laid out a reliability-based process that is intended to form the basis for an industry-accepted approach to assessing and managing pipeline integrity with respect to these damage mechanisms, with an initial focus on metal-loss corrosion. The process combines appropriate failure prediction models, in-line inspection data, the physical and operational characteristics of the pipeline, and corrosion growth rate projections, within a probabilistic analysis framework, to estimate the likelihood of corrosion failure as a function of time. It also provides the means to assess the beneficial impact of selective and staged defect remediation and to evaluate candidate remediation strategies to determine the most cost-effective approach. This paper summarizes the reliability-based assessment and integrity management process. It also illustrates how the results provided can be used to determine the most cost-effective maintenance strategy in terms of the number of features to be remediated and the preferred time to next inspection.
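The cost trade-off described above can be illustrated with a toy model, not the paper's actual process: choose how many features to remediate and when to re-inspect by minimizing expected cost per year. Every number and the failure-rate model below are hypothetical placeholders.

```python
def annual_cost(n_repairs, years_between_inspections,
                repair_cost=20e3, inspection_cost=300e3, failure_cost=5e6,
                p0=0.002, repair_reduction=0.3, growth=0.8):
    """Expected cost per year of one inspect-and-repair cycle (toy model)."""
    # Assumptions: each remediation removes a fixed fraction of the initial
    # annual failure probability; un-repaired corrosion grows, so the annual
    # failure probability rises each year until the next inspection.
    p_first = p0 * (1.0 - repair_reduction) ** n_repairs
    survive = 1.0
    for k in range(years_between_inspections):
        survive *= 1.0 - min(1.0, p_first * (1.0 + growth) ** k)
    p_fail = 1.0 - survive  # probability of at least one failure in the cycle
    cycle = n_repairs * repair_cost + inspection_cost + p_fail * failure_cost
    return cycle / years_between_inspections

# Evaluate candidate strategies and pick the cheapest.
candidates = [(n, t) for n in range(0, 11) for t in (3, 5, 7)]
best = min(candidates, key=lambda c: annual_cost(*c))
print("cheapest (features remediated, years to next inspection):", best)
```

The interplay is the point of the sketch: more remediation lowers near-term risk, while a longer inspection interval amortizes the fixed inspection cost but lets the growing failure probability accumulate.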

Author(s):  
Mark Stephens ◽  
Maher Nessim

Quantitative analysis approaches based on structural reliability methods are gaining wider acceptance as a basis for assessing pipeline integrity and these methods are ideally suited to managing metal loss corrosion damage as identified through in-line inspection. The essence of this approach is to combine deterministic failure prediction models with in-line inspection data, the physical and operational characteristics of the pipeline, corrosion growth rate projections, and the uncertainties inherent in this information, to estimate the probability of corrosion failure as a function of time. The probability estimates so obtained provide the basis for informed decisions on which defects to repair, when to repair them and when to re-inspect. While much has been written in recent years on these types of analyses, the authors are not aware of any published methods that address all of the factors that can significantly influence the probability estimates obtained from such an analysis. Of particular importance in this context are the uncertainties associated with the reported defect data, the uncertainties associated with the models used to predict failure from this defect data, and the approach used to discriminate between failure by leak and failure by burst. The correct discrimination of failure mode is important because tolerable failure probabilities should depend on the mode of failure, with lower limits being required for burst failures because the consequences of failure are typically orders of magnitude more severe than for leaks. This paper provides an overview of a probabilistic approach to corrosion defect management that addresses the key sources of uncertainty and discriminates between failure modes. This approach can be used to assess corrosion integrity based on in-line inspection data, schedule defect repairs and provide guidance in establishing re-inspection intervals.
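The probability calculation described above can be sketched as a small Monte Carlo simulation. This is an illustrative toy, not the authors' method: the burst equation is a simplified B31G-style formula, and all pipe, defect-sizing, and growth-rate numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pipe properties: diameter, wall thickness, yield strength,
# operating pressure (mm, mm, MPa, MPa).
D, t, SMYS, Pop = 610.0, 7.9, 359.0, 5.0
n = 200_000

# ILI-reported defect sizes perturbed by sizing uncertainty, plus an
# uncertain corrosion growth rate (all distributions invented).
depth0 = np.clip(rng.normal(3.0, 0.8, n), 0.1, None)     # depth, mm
length = np.clip(rng.normal(50.0, 10.0, n), 1.0, None)   # length, mm
growth = rng.gamma(4.0, 0.05, n)                         # mm/yr (mean 0.2)

def p_leak_burst(years):
    """Monte Carlo estimates of leak and burst probability after `years`."""
    d = np.minimum(depth0 + growth * years, t)
    leak = d >= 0.8 * t                       # leak: near-through-wall depth
    frac = d / t
    M = np.sqrt(1.0 + 0.8 * length**2 / (D * t))          # Folias factor
    Pb = 1.1 * SMYS * (2*t/D) * (1 - (2/3)*frac) / (1 - (2/3)*frac/M)
    burst = (Pb < Pop) & ~leak                # burst: rupture before leak
    return leak.mean(), burst.mean()

for yr in (0, 5, 10):
    leak_p, burst_p = p_leak_burst(yr)
    print(f"year {yr:2d}: P(leak) = {leak_p:.4f}, P(burst) = {burst_p:.4f}")
```

Discriminating the two modes, as the abstract emphasizes, matters because the two probabilities would be compared against different tolerable limits.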


Author(s):  
Mark Stephens ◽  
Albert van Roodselaar

The pipeline industry is moving to embrace more quantitative analysis methods for assessing pipeline integrity and demonstrating the benefits of integrity maintenance programs. Analysis based on structural reliability concepts is ideally suited to this purpose. In the context of corrosion management, the essence of this approach is to combine appropriate failure prediction models, in-line inspection data, the physical and operational characteristics of the pipeline, and corrosion growth rate projections, within a probabilistic analysis framework, to estimate the likelihood of corrosion failure as a function of time. A key element in this analysis approach is explicit consideration of all significant forms of uncertainty, including the uncertainties inherent in the data obtained from in-line inspection. This paper provides an overview of an ongoing research project, sponsored by the Pipeline Research Council International (PRCI), which is developing a reliability-based process that will form the basis for an industry-accepted approach to assessing and managing pipeline integrity with respect to corrosion. It also discusses the sources of uncertainty inherent in the in-line inspection process and their significance in the context of corrosion reliability analysis.


Author(s):  
Anna C. Thornton

Abstract Quality has been a rallying call in the design and manufacturing world for the last two decades. One way to improve quality is to reduce the impact of manufacturing variation. Variation risk mitigation is challenging, especially when a product has multiple quality characteristics and complex production and assembly. It is common wisdom that companies should identify and mitigate the risk associated with variation throughout the design process. As yield problems are identified, they should be mitigated using the most cost-effective approach. One approach to variation risk mitigation is variation reduction (VR). VR targets the reduction of variation introduced by existing manufacturing processes using tools such as Design of Experiments (DOE) and robust design. Many companies have dedicated groups that specialize in these methods. VR teams have the role of improving manufacturing performance; however, these teams are limited in their resources. In addition, no tools exist to quantitatively determine where a VR team’s efforts are most effectively deployed. This paper provides a mathematical and optimization model to best allocate VR resources in a complex product.
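As a sketch of the allocation problem (not Thornton's actual formulation), limited VR effort can be assigned to candidate variation-reduction projects as a 0-1 knapsack solved by dynamic programming; all project names and numbers below are invented.

```python
# Hypothetical VR projects: (name, effort in person-weeks, expected savings in $k)
projects = [
    ("DOE on stamping press", 6, 120),
    ("Robust design of latch", 4, 60),
    ("Fixture recalibration", 2, 45),
    ("Supplier SPC program", 5, 80),
    ("Adhesive cure study", 3, 30),
]
budget = 10  # person-weeks of VR team capacity available

# dp[w] = best savings achievable with w person-weeks;
# choice[w] = the set of projects achieving it.
dp = [0] * (budget + 1)
choice = [set() for _ in range(budget + 1)]
for name, effort, savings in projects:
    # Iterate weights downward so each project is selected at most once.
    for w in range(budget, effort - 1, -1):
        if dp[w - effort] + savings > dp[w]:
            dp[w] = dp[w - effort] + savings
            choice[w] = choice[w - effort] | {name}

print("best savings ($k):", dp[budget])
print("selected:", sorted(choice[budget]))
```

A real model would add interactions between quality characteristics and uncertain savings, which is what pushes the problem beyond a plain knapsack.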


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Stefano Perni ◽  
Polina Prokopovich

Abstract Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, thus greatly reducing the impact of the information gathered and hindering understanding of the mechanisms involved in the replacement, development and pathology of this tissue. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissues.
The rheometer-based methodology developed was validated by the close agreement of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained using the more complex testing techniques available in the literature. Rheometry is comparatively simple, does not require capital-intensive machinery, and demands less specialised staff training; the use of a rheometer would therefore represent a cost-effective approach for the determination of frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D.
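The moduli discussed above are recovered from oscillatory data by resolving the stress response into in-phase and out-of-phase components relative to the applied strain. A minimal sketch on synthetic, noise-free data (real inputs would come from the rheometer; the amplitude, frequency, and ground-truth moduli are invented for the demo):

```python
import numpy as np

g0, freq = 0.01, 1.0                 # strain amplitude (-), frequency (Hz)
G_store, G_loss = 2.0e5, 0.5e5       # ground-truth G', G'' (Pa) for the demo

# Synthetic oscillatory test: strain input and the resulting stress signal.
t = np.linspace(0.0, 2.0, 2000)      # two full cycles at 1 Hz
w = 2 * np.pi * freq
strain = g0 * np.sin(w * t)
stress = g0 * (G_store * np.sin(w * t) + G_loss * np.cos(w * t))

# Least-squares fit stress = a*sin(wt) + b*cos(wt):
# the in-phase coefficient gives G' = a/g0, the quadrature one G'' = b/g0.
A = np.column_stack([np.sin(w * t), np.cos(w * t)])
a, b = np.linalg.lstsq(A, stress, rcond=None)[0]
Gp, Gpp = a / g0, b / g0
Gstar = np.hypot(Gp, Gpp)            # |G*| = sqrt(G'^2 + G''^2)
tan_delta = Gpp / Gp                 # loss tangent
print(f"G' = {Gp:.3e} Pa, G'' = {Gpp:.3e} Pa, |G*| = {Gstar:.3e} Pa")
```

Repeating the fit across a sweep of frequencies and normal loads yields the frequency- and load-dependent moduli reported in the abstract.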


2021 ◽  
Author(s):  
Ioanna Tselka ◽  
Isidora Isis Demertzi ◽  
George P. Petropoulos

<p>The Covid-19 pandemic has led to severe consequences for humanity worldwide. Yet, to our knowledge, little scientific evidence is available exploring the impact of the pandemic on criminality. It is therefore imperative to examine the spatial relationship between the two to obtain a better understanding of societal characteristics during the pandemic.</p><p>This study aims at demonstrating the use of geoinformation in analyzing the spatial patterns between crime and Covid-19 spread, using as a case study New York City, USA, one of the largest metropolitan cities in the world. To address our objectives, geostatistical analysis and data visualization methods were applied to real-world crime data acquired from a web-GIS platform. Our analysis covers two equal time periods, before and after the lockdown implementation.</p><p>Results revealed some very interesting spatial patterns between the examined parameters and the societal characteristics of the study region. The methodological framework presented underlines the added value of geoinformation as a robust and cost-effective approach to examining the impact of the pandemic on society.</p><p><strong>Keywords:</strong> Covid-19, pandemic, crime rates, geoinformation, New York</p>


Author(s):  
Sherif Hassanien ◽  
Len Leblanc ◽  
Javier Cuervo ◽  
Karmun Cheng

Reliability engineering is a mature discipline that has been used extensively in industries such as aviation, nuclear energy, automobiles, and structures. The application of reliability principles (especially structural reliability) to oil and gas transmission pipelines is still an active area of development. The advent of high-resolution in-line inspection (ILI) tools facilitates a formal application of reliability methods in pipeline integrity in order to safely manage deformation, metal loss, and crack threats. At the same time, the massive amount of ILI data, the associated uncertainties, and the availability and accuracy of failure prediction models present a challenge for operators seeking to implement reliability analysis to check the safety of integrity programs within available timeframes. On the other hand, approximate reliability techniques may affect the analysis in terms of both accuracy and precision. In this paper, a Pipeline Integrity Reliability Analysis (PIRA) approach is presented in which the sophistication of the reliability analysis is staged into three levels: PIRA Levels I, II and III. The three PIRA levels correspond to different representations of integrity uncertainties, uses of available validated/calibrated data, uses of statistical models for operating pressure and resistance random variables, implementations of reliability methods, and considerations of failure modes. Moreover, the PIRA levels allow for improved integration of reliability analysis with the existing timelines and stages of traditional integrity programs, such that integrity data are updated as the integrity program progresses. The proposed integrity reliability approach allows for the delivery of safety checks leveraging all of the information available at any given point in time. In addition, the approach provides a full understanding of the strengths and weaknesses of each PIRA level.
Pipeline corrosion case studies are provided herein to illustrate how the PIRA Levels can be applied to integrity programs.
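The staging idea can be illustrated with a toy limit state g = R - S (resistance minus load): a quick first-order estimate built only from means and standard deviations, versus a full Monte Carlo run with the actual (skewed) load distribution. The limit state and every number here are invented and far simpler than the actual PIRA level definitions.

```python
import math
import numpy as np

mR, sR = 10.0, 1.5     # burst resistance: mean, std (hypothetical units)
mS, sS = 5.0, 1.0      # operating load: mean, std

# Screening-level estimate: reliability index from means/stds only,
# implicitly treating both variables as normal.
beta = (mR - mS) / math.hypot(sR, sS)
pf_fosm = 0.5 * math.erfc(beta / math.sqrt(2.0))

# Detailed-level check: Monte Carlo with a lognormal load model whose
# mean and std match the values used in the screening estimate.
rng = np.random.default_rng(1)
n = 1_000_000
R = rng.normal(mR, sR, n)
sigma = math.sqrt(math.log(1.0 + (sS / mS) ** 2))  # lognormal moment match
mu = math.log(mS) - 0.5 * sigma**2
S = rng.lognormal(mu, sigma, n)
pf_mc = float(np.mean(R < S))

print(f"beta = {beta:.2f}, pf_fosm = {pf_fosm:.2e}, pf_mc = {pf_mc:.2e}")
```

The two estimates agree in order of magnitude here because the load is only mildly skewed; the gap between analysis levels widens as distributions depart further from normal, which is the motivation for staging the sophistication of the analysis.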


2019 ◽  
Vol 40 (2) ◽  
pp. 129-148 ◽  
Author(s):  
Gentile Francesco Ficetola ◽  
Raoul Manenti ◽  
Pierre Taberlet

Abstract In the last decade, eDNA and metabarcoding have opened new avenues to biodiversity studies; amphibians and reptiles are animals for which these new approaches have allowed great leaps forward. Here we review different approaches through which eDNA can be used to study amphibians, reptiles and many more organisms. eDNA is often used to evaluate the presence of target species in freshwaters; it has been particularly useful to detect invasive alien amphibians and secretive or rare species, but the metabarcoding approach is increasingly used as a cost-effective approach to assess entire communities. There is growing evidence that eDNA can be also useful to study terrestrial organisms, to evaluate the relative abundance of species, and to detect reptiles. Metabarcoding has also revolutionized studies on the microbiome associated to skin and gut, clarifying the complex relationships between pathogens, microbial diversity and environmental variation. We also identify additional aspects that have received limited attention so far, but can greatly benefit from innovative applications of eDNA, such as the study of past biodiversity, diet analysis and the reconstruction of trophic interactions. Despite impressive potential, eDNA and metabarcoding also bear substantial technical and analytical complexity; we identify laboratory and analytical strategies that can improve the robustness of results. Collaboration among field biologists, ecologist, molecular biologists, and bioinformaticians is allowing fast technical and conceptual advances; multidisciplinary studies involving eDNA analyses will greatly improve our understanding of the complex relationships between organisms, and our effectiveness in assessing and preventing the impact of human activities.


2017 ◽  
Vol 32 (5) ◽  
pp. 1273-1279 ◽  
Author(s):  
Amy Sanders ◽  
Cendrine Robinson ◽  
Shani C. Taylor ◽  
Samantha D. Post ◽  
Jeffrey Goldfarb ◽  
...  

Purpose: To describe the impact of the National Cancer Institute’s promotion of its youth smoking cessation program, Smokefree Teen (SFT). Design: We provide a description of campaign strategies and outcomes as a means to engage a teen audience in cessation resources using a cost-effective approach. Setting: The campaign occurred nationally, using traditional (TV and radio), online, and social media outreach. Participants: Ads targeted adolescent smokers (aged 14-17). The baseline population was 42 586 and increased to 464 357 during the campaign. Measures: Metrics used to assess outcomes include (1) visits to SFT website from traditional and online ads, (2) cost to get an online ad clicked (cost-per-click), and (3) SmokefreeTXT program enrollments during the 8-week campaign period. Analysis: We conducted a quantitative performance review of all tactics. Results: The SFT campaign achieved an online ad click-through rate of 0.33%, exceeding industry averages of 0.15%. Overall, web traffic to teen.smokefree.gov increased by 980%, and the online cost-per-click for ads, including social media actions, was approximately $1 as compared with $107 for traditional ads. Additionally, the campaign increased the SmokefreeTXT program teen sign-ups by 1334%. Conclusion: The campaign increased engagement with evidence-informed cessation resources for teen smokers. Results show the potential of using multiple, online channels to help increase engagement with core resources.
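The campaign metrics in the abstract (click-through rate, cost-per-click, percent increase) reduce to simple ratios. The counts below are hypothetical, chosen only to reproduce a 0.33% CTR and a $1 CPC; they are not the campaign's actual impression or spend figures.

```python
def ctr(clicks, impressions):
    """Click-through rate: clicks divided by ad impressions."""
    return clicks / impressions

def cpc(spend, clicks):
    """Cost-per-click: total spend divided by clicks."""
    return spend / clicks

def pct_increase(before, after):
    """Percent increase from a baseline value."""
    return (after - before) / before * 100.0

clicks, impressions, spend = 33_000, 10_000_000, 33_000.0
print(f"CTR = {ctr(clicks, impressions):.2%}")
print(f"CPC = ${cpc(spend, clicks):.2f}")
print(f"increase from 100 to 1080 = {pct_increase(100.0, 1080.0):.0f}%")
```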


2020 ◽  
Vol 41 (1) ◽  
pp. 89-99
Author(s):  
Aureliano Paolo Finch ◽  
John Brazier ◽  
Clara Mukuria

Background Generic preference-based measures (GPBMs) such as the EQ-5D are valid across many conditions, but in some cases “bolting on” additional dimensions may improve validity. The selection of “bolt-ons” has been based on the psychometric impact of individual dimensions, but preferences provide another important way to select them. This study aims to test the potential of using pairwise choices to inform the selection of bolt-ons for the EQ-5D-5L. Methods General population preferences were collected using an online survey of 1040 UK residents. Three EQ-5D-5L health state pairs were selected on the basis of a 50:50 split in respondent preferences in a previous pairwise survey. Participants were presented with pairwise choices of EQ-5D-5L health states without and with bolt-ons of hearing, sleep, cognition, energy, and relationships, each added individually. Logistic models were used to assess the impact of the bolt-ons, and of bolt-ons at different severity levels, on the log odds of respondents choosing between health states. Results Preferences varied according to the bolt-ons and their severity level (only levels 1, 3, and 5 were used). Adding a bolt-on at level 1 generally resulted in differences that were not statistically significant, while adding a bolt-on at level 3 or level 5 produced a negative and statistically significant impact on preferences for the health state with the bolt-on. At level 5, hearing had the largest impact, followed by cognition, relationships, energy, and sleep. At level 3, cognition produced the largest impact, followed by hearing and sleep with similar impacts, then energy and relationships. This ordering offers information for bolt-on selection, with hearing and cognition appearing to be the most important. The weight placed on the different health problems is not constant across severity levels between bolt-ons.
Conclusions Pairwise choices provide a cost-effective approach of generating information on preferences to support bolt-on selection.
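The logistic analysis can be sketched on simulated choice data: adding a bolt-on at a given severity level shifts the log odds of the bolt-on state being chosen, and the shift is recovered from the simulated choices. The effect sizes below are invented for the demo, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented "true" log-odds shifts: level 1 has ~no effect, level 5 the worst.
true_logodds = {1: 0.0, 3: -1.0, 5: -2.5}
n = 20_000  # simulated pairwise choices per level

ests = {}
for level, lo in true_logodds.items():
    # Probability a respondent picks the health state carrying the bolt-on.
    p_choose = 1.0 / (1.0 + np.exp(-lo))
    chose = rng.random(n) < p_choose
    k = int(chose.sum())
    # Empirical log odds; with a single binary predictor this equals the
    # intercept a logistic regression would fit.
    ests[level] = float(np.log(k / (n - k)))
    print(f"level {level}: true {lo:+.2f}, estimated {ests[level]:+.2f}")
```

In the study itself, the fitted models additionally control for which health-state pair was shown; the point here is only how severity level maps onto a log-odds shift.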


CJEM ◽  
2019 ◽  
Vol 21 (S1) ◽  
pp. S108
Author(s):  
Z. Siddiqi ◽  
E. Lang ◽  
D. Grigat ◽  
S. Vatanpour

Introduction: Iron Deficiency Anemia (IDA) is a common presentation to the emergency department (ED) and is often treated with red blood cell (RBC) transfusions. Choosing Wisely and the American Association of Blood Banks released guidelines in 2016 outlining the circumstances under which transfusions should be given to patients with IDA. Few well-powered studies have looked at the impact of these guidelines on transfusions in EDs. The goal of this study was to examine the number of RBC transfusions given in EDs in Calgary, Alberta from 2014–2018 and what proportion of these were potentially avoidable (PA). Methods: We analyzed 8651 IDA patient encounters from 2014–2018 at four centers in the Calgary Zone. A transfusion was considered PA if the patient's hemoglobin (hgb) was ≥70 g/L AND the patient was hemodynamically stable. We performed descriptive statistics to assess the number of transfusions and the number of avoidable transfusions. We used chi-squared tests to determine whether there were significant differences by site, time period, and hemoglobin level. Results: In total, 990 (11.4%) of the encounters received transfusions; 711 (71.8%) were indicated while 279 (28.1%) were PA. Of the transfusions that were indicated, 230 (32.3%) were given to patients with a hgb <70 g/L and 481 (67.7%) were given to patients with a hgb >70 g/L who were hemodynamically unstable. Of the transfusions that were PA, the highest number were given to those in the 71–80 g/L hgb group (142) and the lowest number to those in the 110–130 g/L hgb group (9), a difference that was statistically significant (p < 0.001). The PA transfusion rates from 2014 to 2018 were 30.8%, 25.6%, 34.5%, 23.6%, and 20.7% respectively, a statistically significant difference (p = 0.004). Conclusion: Our data suggest that the number of PA transfusions at the hospitals in the Calgary Zone is comparable to the rates reported in the existing literature.
In addition, the rate of PA transfusions has decreased since the release of the guidelines. A limitation of the present study was that it did not look at the number of units of red blood cells transfused and since many patients receive more than one unit, it is possible that the number of PA transfusions was underestimated. Nevertheless, we intend to use our results to create a safer and more cost-effective approach to managing IDA.
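The chi-squared comparison of yearly PA rates can be sketched as a 2×5 contingency test. The abstract reports only percentages, so the per-year counts below are hypothetical (chosen to roughly match the reported rates) and the statistic will not reproduce the reported p = 0.004 exactly.

```python
import numpy as np

years = [2014, 2015, 2016, 2017, 2018]
pa     = np.array([40,  43,  69,  59,  50])   # PA transfusions (hypothetical)
not_pa = np.array([90, 125, 131, 191, 192])   # indicated transfusions (hypothetical)

# 2 x 5 contingency table: rows = PA / not PA, columns = years.
obs = np.vstack([pa, not_pa])
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
exp = row * col / obs.sum()                   # expected counts under independence
chi2 = float(((obs - exp) ** 2 / exp).sum())  # Pearson chi-squared statistic
df = (obs.shape[0] - 1) * (obs.shape[1] - 1)

# df = 4, for which the 5% critical value is 9.49: a statistic above it
# indicates the yearly PA rates differ significantly.
print(f"chi2 = {chi2:.2f} on {df} df (5% critical value: 9.49)")
```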

