Estimation of Corrosion Rates by Run Comparison: A Stochastic Scoring Methodology

Author(s):  
Érika S. M. Nicoletti ◽
Ricardo D. de Souza

Pipeline operators map and quantify corrosion damage along their aging pipeline systems by carrying out periodic in-line metal-loss inspections. Comparing the data sets from subsequent runs of such inspections is one of the most reliable techniques for inferring representative corrosion growth rates throughout the pipeline length within the period between two inspections. Presently, there are two distinct approaches to inferring corrosion rates from multiple in-line inspections: individual comparison of the detected defective areas (quantified by more than one inspection), and comparison between populations. The former usually requires a laborious matching process between the run-data sets, while the drawback of the latter is that it often fails to notice hot-spot areas. The objective of this work is to present a new methodology that allows quick data comparison of two runs while still preserving the locally distinct characteristics of corrosion-process severity. Three procedures must be performed. Firstly, the ILI metal-loss data sets should be submitted to a filtering/adjustment process that takes into consideration reporting-threshold consistency, the possible existence of systematic bias, and corrosion-mechanism similarity. Secondly, the average metal-loss growth rate between inspections should be determined from the filtered populations. Thirdly, the defects reported by the latest inspection should have their corrosion growth rates individually determined as a function of the mean depth values of the whole population and of the defect neighborhood. The methodology allows quick and realistic damage-progression estimates, supporting more cost-effective and reliable strategies for the integrity management of aged corroded systems. Model robustness and general feasibility are demonstrated in a real case study.
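The second and third procedures described above can be sketched as follows. The population-average rate follows directly from the filtered depth populations; the neighbourhood weighting used here is an illustrative assumption for the "function of the mean depth values" step, not the authors' published formula.

```python
import numpy as np

def population_growth_rate(depths_run1, depths_run2, years_between):
    """Average metal-loss growth rate between two filtered ILI populations
    (depths in mm or %wt, inspection interval in years)."""
    return (np.mean(depths_run2) - np.mean(depths_run1)) / years_between

def local_growth_rate(defect_depth, neighbour_depths, population_mean, population_rate):
    """Scale the population-average rate by how deep the defect and its
    neighbourhood are relative to the whole population (illustrative weighting)."""
    local_mean = np.mean(neighbour_depths)
    weight = 0.5 * (defect_depth + local_mean) / population_mean
    return population_rate * weight
```

A defect sitting in a neighbourhood deeper than the population mean is thus assigned a rate above the population average, preserving the local severity signal without run-to-run defect matching.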

Author(s):  
Jane Dawson ◽  
Lautaro Ganim

Corrosion is still one of the major threats to the integrity of onshore and offshore pipelines. Realistic corrosion growth rates are essential inputs to safe and effective pipeline integrity management decisions. For example, corrosion rates are needed to predict pipeline reliability as a function of time, to identify the need for and timing of field investigations and/or repairs, and to determine optimum re-inspection intervals, to name just a few applications. The consequences of using incorrect corrosion growth rates range from the inefficient use of resources (time, people and money) on unnecessary repairs and/or inspections to unexpected pipeline releases. Identifying where corrosion is active on a pipeline and how fast it is growing is a complex process: it is understood in the general sense but is highly variable in practice. Corrosion is therefore difficult to predict due to the very localised nature of its behaviour and the many parameters that influence the corrosion reaction. Running an in-line inspection (ILI) tool in a pipeline identifies the internal and/or external corrosion located along the full length of the pipeline. The ILI inspection also determines depth, length and width measurements for each corrosion site and for the overall feature. The use of repeat ILI data to match and compare metal-loss sites in order to estimate the corrosion growth rates at individual defects along a pipeline is a well-established practice in the industry. The use of such corrosion rates to make predictions of the future integrity of a pipeline started in earnest approximately 5 to 10 years ago, and over that time considerable experience has been gained. Now that we are starting to collect 3, 4, 5 or even more ILI data sets for the same pipelines, we are able to test and validate our earlier ILI-based growth rate predictions against what actually occurred in the pipeline over time. With the benefit of this hindsight, the methodologies employed for evaluating and applying ILI-based corrosion rates can be further improved and refined to give more accurate predictions of the future pipeline condition, for setting the response schedule, and for setting the timing of re-inspections. This paper shares the experience gained and the improvements that can be made to the determination of corrosion rates and the application of these rates in a pipeline integrity assessment. These topics are illustrated and investigated via case studies on real ILI data sets.


2017 ◽  
Vol 14 (4) ◽  
pp. 172988141770907 ◽  
Author(s):  
Hanbo Wu ◽  
Xin Ma ◽  
Zhimeng Zhang ◽  
Haibo Wang ◽  
Yibin Li

Human daily activity recognition has been a hot topic in the field of computer vision for decades. Despite best efforts, activity recognition in naturally uncontrolled settings remains a challenging problem. Recently, by being able to perceive depth and visual cues simultaneously, RGB-D cameras have greatly boosted the performance of activity recognition. However, due to some practical difficulties, the publicly available RGB-D data sets are not sufficiently large for benchmarking when considering the diversity of their activities, subjects, and backgrounds. This severely limits the applicability of complicated learning-based recognition approaches. To address the issue, this article provides a large-scale RGB-D activity data set created by merging five public RGB-D data sets that differ from each other in many aspects, such as length of actions, nationality of subjects, or camera angles. This data set comprises 4528 samples depicting 7 action categories (up to 46 subcategories) performed by 74 subjects. To verify the difficulty of the data set, three feature representation methods are evaluated: depth motion maps, the spatiotemporal depth cuboid similarity feature, and curvature scale space. Results show that the merged large-scale data set is more realistic and challenging and therefore more suitable for benchmarking.
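The first of the evaluated representations, depth motion maps, can be illustrated with a minimal sketch that accumulates absolute inter-frame depth differences into a single map. This is a simplified single-view variant; published DMM methods typically project each depth frame onto three orthogonal planes first.

```python
import numpy as np

def depth_motion_map(depth_frames, threshold=0.0):
    """Accumulate thresholded absolute differences of consecutive depth
    frames into one motion-energy map of the same spatial size."""
    dmm = np.zeros_like(depth_frames[0], dtype=float)
    for prev, curr in zip(depth_frames[:-1], depth_frames[1:]):
        diff = np.abs(curr.astype(float) - prev.astype(float))
        dmm += np.where(diff > threshold, diff, 0.0)  # suppress sensor noise
    return dmm
```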


Author(s):  
James Simek ◽  
Jed Ludlow ◽  
Phil Tisovec

In-line inspection (ILI) tools using the magnetic flux leakage (MFL) technique are the most common type used for performing metal-loss surveys worldwide. Based upon the robust and proven MFL technique, these tools have been shown to operate reliably in the extremely harsh environments of transmission pipelines. In addition to metal loss, MFL tools are capable of identifying a broad range of pipeline features. Most MFL surveys to date have used tools employing axially oriented magnetizers, capable of detecting and quantifying many categories of volumetric metal-loss features. For certain classes of axially oriented features, however, MFL tools using axially oriented fields have encountered difficulty in detection and subsequent quantification. To address features in these categories, tools employing circumferentially or transversely oriented fields have been designed and placed into service, enabling enhanced detection and sizing of axially oriented features. In most cases, multiple surveys are required, as current tools cannot collect both data sets concurrently. Applying the magnetic field in an oblique direction enables detection of axially oriented features and may be used simultaneously with an axially oriented magnetizer. Referencing previous research in adapting circumferential or transverse designs for in-line service, the concept of an oblique-field magnetizer is presented. Models developed to demonstrate the technique are discussed, together with experimental data supporting the concept. Efforts involved in implementing an oblique magnetizer, including magnetic models of field profiles used to determine magnetizer configurations and sensor locations, are presented. Experimental results detail the response of the system to a full range of metal-loss features, supplementing modeling in an effort to determine the effects of variables introduced by differences in magnetic properties and velocity. The experimental results include extremely narrow axially oriented features, many of which are not detected or identified within the axial data set. Experimental and field-verification results for detection accuracies are described in comparison to an axial-field tool.


2011 ◽  
Vol 4 (5) ◽  
pp. 775-793 ◽  
Author(s):  
S. M. Illingworth ◽  
J. J. Remedios ◽  
H. Boesch ◽  
S.-P. Ho ◽  
D. P. Edwards ◽  
...  

Abstract. Observations of atmospheric carbon monoxide (CO) can only be made on continental and global scales by remote sensing instruments situated in space. One such instrument is the Infrared Atmospheric Sounding Interferometer (IASI), producing spectrally resolved, top-of-atmosphere radiance measurements from which CO vertical layers and total columns can be retrieved. This paper presents a technique for intercomparisons of satellite data with low vertical resolution. The example in the paper also generates the first intercomparison between an IASI CO data set, in this case that produced by the University of Leicester IASI Retrieval Scheme (ULIRS), and the V3 and V4 operationally retrieved CO products from the Measurements Of Pollution In The Troposphere (MOPITT) instrument. The comparison is performed for a localised region of Africa, primarily for an ocean day-time configuration, in order to develop the technique for instrument intercomparison in a region with a well-defined a priori. By comparing both the standard data and a special version of MOPITT data retrieved using the ULIRS a priori for CO, it is shown that standard intercomparisons of CO are strongly affected by the differing a priori data of the retrievals and by the differing sensitivities of the two instruments. In particular, the differing a priori profiles for MOPITT V3 and V4 data result in systematic retrieved-profile changes, as expected. An application of averaging kernels is used to derive a difference quantity which is much less affected by smoothing error, and hence more sensitive to systematic error. These conclusions are confirmed by simulations with model profiles for the same region. This technique is used to show that, for the data processed, the systematic bias between MOPITT V4 and ULIRS IASI data, at MOPITT vertical resolution, is less than 7 % for the comparison data set, and on average appears to be less than 4 %. The results of this study indicate that intercomparisons of satellite data sets with low vertical resolution should ideally be performed with: retrievals using a common a priori appropriate to the geographic region studied; the application of averaging kernels to compute difference quantities with reduced a priori influence; and a comparison with simulated differences using model profiles for the target gas in the region.
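The averaging-kernel application described above rests on the standard Rodgers smoothing relation: given a retrieval's averaging-kernel matrix A and a priori profile x_a, the profile the instrument would report for a true (or high-resolution model) state x_true is x_a + A(x_true − x_a). A minimal sketch:

```python
import numpy as np

def apply_averaging_kernel(A, x_true, x_a):
    """Smooth a high-resolution profile with a retrieval's averaging kernel
    so that both data sets share the same vertical sensitivity and a priori
    influence before differencing."""
    A, x_true, x_a = map(np.asarray, (A, x_true, x_a))
    return x_a + A @ (x_true - x_a)
```

With A equal to the identity the retrieval reproduces the true profile; with A equal to zero it returns the a priori, which is why differing a priori profiles dominate unsmoothed intercomparisons.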


2021 ◽  
Vol 13 (5) ◽  
pp. 2407-2436
Author(s):  
Olivier Bock ◽  
Pierre Bosser ◽  
Cyrille Flamant ◽  
Erik Doerflinger ◽  
Friedhelm Jansen ◽  
...  

Abstract. Ground-based Global Navigation Satellite System (GNSS) measurements from nearly 50 stations distributed over the Caribbean arc have been analysed for the period 1 January–29 February 2020 in the framework of the EUREC4A (Elucidate the Couplings Between Clouds, Convection and Circulation) field campaign. The aim of this effort is to deliver high-quality integrated water vapour (IWV) estimates to investigate the moisture environment of mesoscale cloud patterns in the trade winds and their feedback on the large-scale circulation and energy budget. This paper describes the GNSS data processing procedures and assesses the quality of the GNSS IWV retrievals from four operational streams and one reprocessed research stream which is the main data set used for offline scientific applications. The uncertainties associated with each of the data sets, including the zenith tropospheric delay (ZTD)-to-IWV conversion methods and auxiliary data, are quantified and discussed. The IWV estimates from the reprocessed data set are compared to the Vaisala RS41 radiosonde measurements operated from the Barbados Cloud Observatory (BCO) and to the measurements from the operational radiosonde station at Grantley Adams International Airport (GAIA), Bridgetown, Barbados. A significant dry bias is found in the GAIA humidity observations with respect to the BCO sondes (−2.9 kg m−2) and the GNSS results (−1.2 kg m−2). A systematic bias between the BCO sondes and GNSS is also observed (1.7 kg m−2), where the Vaisala RS41 measurements are moister than the GNSS retrievals. The IWV estimates from a collocated microwave radiometer agree with the BCO soundings after an instrumental update on 27 January, while they exhibit a dry bias compared to the soundings and to GNSS before that date. IWV estimates from the ECMWF fifth-generation reanalysis (ERA5) are overall close to the GAIA observations, probably due to the assimilation of these observations in the reanalysis. 
However, during several events where strong peaks in IWV occurred, ERA5 is shown to significantly underestimate the GNSS-derived IWV peaks. Two successive peaks are observed on 22 January and 23–24 January which were associated with heavy rain and deep moist layers extending from the surface up to altitudes of 3.5 and 5 km, respectively. ERA5 significantly underestimates the moisture content in the upper part of these layers. The origins of the various moisture biases are currently being investigated. We classified the cloud organization for five representative GNSS stations across the Caribbean arc using visible satellite images. A statistically significant link was found between the cloud patterns and the local IWV observations from the GNSS sites as well as the larger-scale IWV patterns from the ECMWF ERA5 reanalysis. The reprocessed ZTD and IWV data sets from 49 GNSS stations used in this study are available from the French data and service centre for atmosphere (AERIS) (https://doi.org/10.25326/79; Bock, 2020b).
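The ZTD-to-IWV conversion whose uncertainties are discussed above is commonly implemented with a Bevis-style relation: subtract the zenith hydrostatic delay, then scale the wet delay by a factor depending on the water-vapour-weighted mean temperature Tm. The constants below are commonly used textbook values; the paper's processing streams may use slightly different ones.

```python
def ztd_to_iwv(ztd_m, zhd_m, tm_kelvin):
    """Convert a zenith total delay (m) to integrated water vapour (kg m^-2),
    given the zenith hydrostatic delay (m) and mean temperature Tm (K)."""
    Rv = 461.5        # J kg^-1 K^-1, specific gas constant of water vapour
    k2_prime = 0.221  # K Pa^-1   (22.1 K hPa^-1)
    k3 = 3.739e3      # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
    zwd = ztd_m - zhd_m  # zenith wet delay (m)
    return 1.0e6 * zwd / (Rv * (k2_prime + k3 / tm_kelvin))
```

For tropical conditions the factor is roughly 155 kg m^-2 per metre of wet delay, so a 0.2 m wet delay corresponds to about 31 kg m^-2 of IWV.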


IUCrJ ◽  
2017 ◽  
Vol 4 (5) ◽  
pp. 626-638 ◽  
Author(s):  
James M. Parkhurst ◽  
Andrea Thorn ◽  
Melanie Vollmar ◽  
Graeme Winter ◽  
David G. Waterman ◽  
...  

An algorithm for modelling the background for each Bragg reflection in a series of X-ray diffraction images containing Debye–Scherrer diffraction from ice in the sample is presented. The method involves the use of a global background model which is generated from the complete X-ray diffraction data set. Fitting of this model to the background pixels is then performed for each reflection independently. The algorithm uses a static background model that does not vary over the course of the scan. The greatest improvement can be expected for data where ice rings are present throughout the data set and the local background shape at the size of a spot on the detector does not exhibit large time-dependent variation. However, the algorithm has been applied to data sets whose background showed large pixel variations (variance/mean > 2) and has been shown to improve the results of processing for these data sets. It is shown that the use of a simple flat-background model, as in traditional integration programs, causes systematic bias in the background determination at ice-ring resolutions, resulting in an overestimation of reflection intensities at the peaks of the ice rings and an underestimation of reflection intensities on either side of the ice ring. The new global background-model algorithm presented here corrects for this bias, resulting in a noticeable improvement in R factors following refinement.
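The per-reflection fit of a global background model can be sketched as a one-parameter least-squares scaling of the global shape to each reflection's background pixels (the published algorithm's model construction and robust fitting are considerably more elaborate):

```python
import numpy as np

def fit_background_scale(global_model, observed):
    """Least-squares scale s minimising ||observed - s * global_model||^2
    over a reflection's background pixels."""
    g = np.asarray(global_model, dtype=float).ravel()
    o = np.asarray(observed, dtype=float).ravel()
    return float(np.dot(g, o) / np.dot(g, g))
```

Because the global model already encodes the ice-ring shape, the fitted background rises at ice-ring resolutions instead of averaging the ring into a flat plane, removing the bias described above.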


Author(s):  
Yanping Li ◽  
Gordon Fredine ◽  
Yvan Hubert ◽  
Sherif Hassanien

With the increased number of in-line inspections (ILI) on pipelines, it is important to evaluate ILI tool performance to support rational integrity decisions. API 1163, "In-Line Inspection Systems Qualification", outlines an ILI data set validation process which is mainly based on comparing ILI data with field measurements. The concept of comparing ILI results with previous ILI data is briefly mentioned in API 1163 Level 1 validation and discussed in detail in the CEPA Metal Loss ILI Tool Validation guidance document. However, the CEPA document recommends a different approach from API 1163. Although methodologies for validating ILI performance are available, beyond determining whether an inspection data set is acceptable, the role of ILI validation in integrity management decision making is not well defined in these documents. Enbridge has reviewed the API 1163 and CEPA methodologies and developed a process to validate metal-loss ILI results. This process uses API 1163 as the tool-performance acceptance criterion, while the CEPA method is used to provide additional information such as depth over-call or under-call. The process captures the main concepts of both the API 1163 and CEPA methodologies. It adds a new dimension to the validation procedure by evaluating different corrosion morphologies, depth ranges, and proximity to the long seam and girth welds. The process also checks ILI results against previous ILI data sets and combines the results of several inspections. The validation results of one inspection indicate whether the inspection data set is acceptable based on the ILI specification; this information is useful for excavation selection. A tool-performance review based on several inspection data sets identifies the strengths and weaknesses of an inspection tool; this information is used to ensure the tool selection is appropriate for the expected feature types on the pipeline. Applications of the validation process are provided to demonstrate how the process can aid in making integrity decisions and managing metal-loss threats.
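A simplified, unity-plot-style depth check in the spirit of the comparisons above can be sketched as follows. The full API 1163 statistical treatment (e.g. binomial confidence bounds) is omitted, and the ±10 %wt tolerance and 80 % certainty figures are illustrative tool-specification values, not quotes from the standard.

```python
def depth_agreement(ili_depths, field_depths, tol_pct_wt=10.0, certainty=0.80):
    """Fraction of matched features whose ILI depth call (%wt) agrees with
    the field measurement within the tool tolerance, plus a pass/fail flag."""
    within = [abs(i - f) <= tol_pct_wt for i, f in zip(ili_depths, field_depths)]
    fraction = sum(within) / len(within)
    return fraction, fraction >= certainty
```

Splitting the matched features by corrosion morphology, depth range, or weld proximity before running such a check is what exposes the per-category strengths and weaknesses discussed above.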


2017 ◽  
Vol 20 (15) ◽  
pp. 2649-2659 ◽  
Author(s):  
Madeleine IG Daepp ◽  
Jennifer Black

Abstract. Objective: The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Design: Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Subjects: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspection lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ > 0.70) with measures constructed from ground-truthed data. Conclusions: Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
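The sensitivity and PPV calculations in the design above follow directly from the overlap between a secondary outlet list and the ground-truthed outlets, sketched here with outlets represented as hashable identifiers:

```python
def sensitivity_ppv(secondary_outlets, ground_truth_outlets):
    """Sensitivity = matched outlets / ground-truthed outlets;
    PPV = matched outlets / outlets listed in the secondary data set."""
    secondary = set(secondary_outlets)
    truth = set(ground_truth_outlets)
    matched = len(secondary & truth)
    return matched / len(truth), matched / len(secondary)
```

In practice the matching step itself (by name and address within a buffer) is the hard part; this sketch assumes matching has already been resolved.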


2015 ◽  
Author(s):  
Marek L Borowiec ◽  
Ernest K Lee ◽  
Joanna C Chiu ◽  
David C Plachetzki

Transcriptome-enabled phylogenetic analyses have dramatically improved our understanding of metazoan phylogeny in recent years, although several important questions remain. The branching order near the base of the tree is one such outstanding issue. To address this question we assemble a novel data set comprising 1,080 orthologous loci derived from 36 publicly available genomes and dissect the phylogenetic signal present in each individual partition. The size of this data set allows for a closer look at the potential biases and sources of non-phylogenetic signal. We assessed a range of measures for each data partition, including information content, saturation, rate of evolution, long-branch score, and taxon occupancy, and explored how each of these characteristics impacts phylogeny estimation. We then used these data to prepare a reduced set of partitions that fit an optimal set of criteria and are amenable to the most appropriate and computationally intensive analyses using site-heterogeneous models of sequence evolution. We also employed several strategies to examine the potential for long-branch attraction to bias our inferences. All of our analyses support Ctenophora as the sister lineage to other Metazoa, although support for this relationship varies among analyses. We find no support for the traditional view uniting the ctenophores and Cnidaria (jellies, anemones, corals, and kin). We also examine the phylogenetic placement of myriapods (centipedes and millipedes) and find that it is more sensitive to the type of analysis and data used. Our study provides a workflow for minimizing systematic bias in whole-genome-based phylogenetic analyses.


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform was shown varied between 17 and 26, or was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
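The ACF signature described in point 2 (a significant negative 1-year lag followed by a lesser positive 2-year lag) can be reproduced with a standard sample autocorrelation; an idealised 2-year cycle is simply an alternating yearly series.

```python
import numpy as np

def acf(series, lag):
    """Sample autocorrelation of a yearly series at a given lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()  # centre the series before correlating
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))
```

For a strictly alternating series of worker counts the lag-1 value approaches −1 and the lag-2 value approaches +1; a damped waveform shows the same sign pattern with smaller magnitudes.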

