Automated Error Identification During Nondestructive Testing of Pipelines for Strength

Author(s):  
Jeffrey A. Kornuta ◽  
Solver I. Thorsson ◽  
Jonathan Gibbs ◽  
Peter Veloo ◽  
Troy Rovella

Abstract The United States Pipeline and Hazardous Materials Safety Administration (PHMSA) recently revised the federal rules governing natural gas transport. PHMSA added a new section on the verification of pipeline material properties for pipeline assets with insufficient or incomplete records. This section permits the use of nondestructive examination (NDE) technologies to estimate material properties, including yield strength (YS) and ultimate tensile strength (UTS), if several conditions are satisfied: NDE measurement accuracy and uncertainty must be conservatively accounted for, the NDE technology must be validated by experts, and proper calibration procedures must be implemented. One such NDE technology is Instrumented Indentation Testing (IIT), which can be used to estimate YS and UTS. Rigorous quantification of any NDE technology’s precision and accuracy requires consistent identification of test errors: if an error occurs during a measurement such that the data should be excluded from subsequent analyses, analysts must be alerted to the characteristics of that data before its results are included. These testing errors are distinct from the inherent measurement uncertainty due to random and systematic error. Any NDE measurement will contain some degree of uncertainty; however, faulty measurements exhibiting clearly identifiable errors must be excluded from subsequent analyses to maintain the integrity of the data set. Accordingly, this paper extends Pacific Gas and Electric’s (PG&E’s) previously reported efforts on IIT uncertainty quantification by presenting observations of a specific type of IIT error related to tool fixturing that has occurred during in-situ testing and describing how this error manifests in the test data. Once this test error was clearly identified, isolated, and found to be repeatable, pre-processing algorithms were adapted to detect it and alert NDE technicians during testing, ultimately evolving NDE work procedures. This paper traces this process from the initial recognition of a test error, through the adaptation of appropriate detection algorithms, to the resulting revisions in operator procedures. Ultimately, these modifications have improved validation data quality and reduced the error rate of IIT measurements collected in the field.
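The paper does not reproduce its pre-processing algorithms, so the following is only a minimal sketch of the general idea of flagging a fixturing-related test error in an indentation record: a sudden load drop in an otherwise monotonically increasing loading curve is reported to the technician. The function name, threshold, and synthetic curve are illustrative assumptions, not PG&E's implementation.

```python
import numpy as np

def flag_fixture_slip(load_N, drop_frac=0.05):
    """Flag an indentation loading curve that shows a sudden load drop,
    a signature consistent with movement of the tool fixturing.

    `drop_frac` is an illustrative threshold, not the value used in the
    paper's pre-processing algorithms.
    """
    load = np.asarray(load_N, dtype=float)
    # Relative load change between consecutive samples of the loading curve.
    rel_change = np.diff(load) / np.maximum(load[:-1], 1e-9)
    slip_idx = np.where(rel_change < -drop_frac)[0]
    return slip_idx.size > 0, slip_idx

# Synthetic loading segment with an injected 10% load drop at sample 120.
depth_um = np.linspace(0.0, 100.0, 200)
load_N = 0.005 * depth_um**1.5
load_N[120:] -= 0.10 * load_N[120]
flagged, where = flag_fixture_slip(load_N)
print(flagged, where)   # prints: True [119]
```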

2008 ◽  
Vol 29 (3) ◽  
pp. 203-211 ◽  
Author(s):  
Conan MacDougall ◽  
Ronald E. Polk

Objective. To describe variability in rates of antibacterial use in a large sample of US hospitals and to create risk-adjusted models for interhospital comparison. Methods. We retrospectively surveyed the use of 87 antibacterial agents on the basis of electronic claims data from 130 medical-surgical hospitals in the United States for the period August 2002 to July 2003; these records represented 1,798,084 adult inpatients. Hospitals were assigned randomly to the derivation data set (65 hospitals) or the validation data set (65 hospitals). Multivariable models predicting rates of antibacterial use were created using the derivation data set. These models were then used to predict rates of antibacterial use in the validation data set, which were compared with observed rates. Rates of antibacterial use were measured in days of therapy per 1,000 patient-days. Results. Across the surveyed hospitals, a mean of 59.3% of patients received at least 1 dose of an antimicrobial agent during hospitalization (range for individual hospitals, 44.4%-73.6%). The mean total rate of antibacterial use was 789.8 days of therapy per 1,000 patient-days (range, 454.4-1,153.4). The best model for the total rate of antibacterial use explained 31% of the variance in rates of antibacterial use and included the number of hospital beds, the number of days in the intensive care unit per 1,000 patient-days, the number of surgeries per 1,000 discharges, and the number of cases of pneumonia, bacteremia, and urinary tract infection per 1,000 discharges. Five hospitals in the validation data set were identified as having outlier rates on the basis of observed antibacterial use greater than the upper bound of the 90% prediction interval for predicted antibacterial use in that hospital. Conclusion. Most adult inpatients receive antimicrobial agents during their hospitalization, but there is substantial variability between hospitals in the volume of antibacterials used. Risk-adjusted models can explain a significant proportion of this variation and allow for comparisons between hospitals for benchmarking purposes.
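As a rough illustration of the benchmarking approach described above, the sketch below fits a multivariable linear model of antibacterial-use rates on hospital characteristics and flags validation hospitals whose observed use exceeds the upper bound of a 90% prediction interval. All predictor names and values are simulated stand-ins, not the study's claims data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical hospital-level predictors mirroring those named in the abstract:
# bed count, ICU days / 1,000 patient-days, surgeries / 1,000 discharges, and
# infection cases / 1,000 discharges. Values are simulated, not study data.
rng = np.random.default_rng(0)
n = 130
X = pd.DataFrame({
    "beds": rng.integers(100, 900, n),
    "icu_days_per_1000": rng.uniform(30, 200, n),
    "surgeries_per_1000": rng.uniform(50, 400, n),
    "infections_per_1000": rng.uniform(20, 150, n),
})
# Simulated outcome: days of antibacterial therapy per 1,000 patient-days.
y = 400 + 0.2 * X["beds"] + 1.5 * X["icu_days_per_1000"] + rng.normal(0, 80, n)

# Split into derivation and validation halves, as in the study design.
derivation, validation = slice(0, 65), slice(65, None)
Xd = sm.add_constant(X.iloc[derivation])
model = sm.OLS(y.iloc[derivation], Xd).fit()

# Predict validation-hospital rates with a 90% prediction interval and flag
# hospitals whose observed use exceeds the upper bound ("outlier" hospitals).
Xv = sm.add_constant(X.iloc[validation])
pred = model.get_prediction(Xv).summary_frame(alpha=0.10)
outliers = y.iloc[validation] > pred["obs_ci_upper"].to_numpy()
print(f"{int(outliers.sum())} validation hospitals flagged as high outliers")
```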


2016 ◽  
Author(s):  
Bojan Sič ◽  
Laaziz El Amraoui ◽  
Andrea Piacentini ◽  
Virginie Marécal ◽  
Emanuele Emili ◽  
...  

Abstract. In this study, we describe the development of the aerosol optical depth (AOD) assimilation module in the chemistry-transport model (CTM) MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle). Our goal is to assimilate the 2D column AOD data from the National Aeronautics and Space Administration (NASA) Moderate-resolution Imaging Spectroradiometer (MODIS) instrument and to evaluate the improvement of a 3D CTM assimilation run relative to a direct model run. Our assimilation system uses 3D-FGAT (First Guess at Appropriate Time) as the assimilation method and the total 3D aerosol concentration as the control variable. In order to have an extensive validation data set, we set our experiment in the northern summer of 2012, when the pre-ChArMEx (CHemistry and AeRosol MEditerranean EXperiment) field campaign TRAQA (TRAnsport à longue distance et Qualité de l’Air dans le bassin méditerranéen) took place in the western Mediterranean basin. The assimilated model run is evaluated independently against a range of aerosol properties (2D and 3D) measured by in-situ instruments (the TRAQA size-resolved balloon and aircraft measurements), the satellite Spinning Enhanced Visible and InfraRed Imager (SEVIRI) instrument and ground-based instruments from the Aerosol Robotic Network (AERONET). The evaluation demonstrates that the AOD assimilation greatly improves aerosol representation in the model. For example, the comparison of the direct and the assimilated model run with AERONET data shows that the assimilation reduced the bias in the AOD (from 0.050 to 0.006) and increased the correlation (from 0.74 to 0.88). When compared to the 3D concentration data obtained by the in-situ aircraft and balloon measurements, the assimilation consistently improves the model output. The best results, as expected, occur when the shape of the vertical profile is correctly simulated by the direct model. We also examine how the assimilation can influence the modelled aerosol vertical distribution. The results show that a continuous 2D AOD assimilation can improve the 3D vertical profile as a result of differential horizontal transport of aerosols in the model.
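As a much-simplified illustration of how a 2D column AOD observation can correct a 3D aerosol profile, the sketch below performs a single-column BLUE-style update with column AOD as the observation and the concentration profile as the control variable. The extinction efficiencies, layer thicknesses, error covariances and observation value are invented for illustration; the operational MOCAGE system uses 3D-FGAT over an assimilation window with full background-error covariances.

```python
import numpy as np

# One model column: background aerosol concentrations (ug m^-3) on 5 levels.
xb = np.array([8.0, 12.0, 20.0, 10.0, 4.0])

# Assumed linear observation operator: column AOD = sum(k_i * dz_i * x_i),
# with invented mass extinction efficiencies k and layer thicknesses dz.
# This stands in for the model's real AOD operator.
k = np.array([3.0, 3.0, 3.5, 3.5, 4.0]) * 1e-6          # m^2 per ug
dz = np.array([500.0, 800.0, 1200.0, 1500.0, 2000.0])   # m
H = (k * dz)[np.newaxis, :]                              # 1 x 5 operator

# Diagonal background-error covariance and scalar observation-error variance
# (both illustrative).
B = np.diag((0.3 * xb) ** 2)
R = np.array([[0.02 ** 2]])

y_obs = np.array([0.25])          # assimilated MODIS column AOD
innovation = y_obs - H @ xb       # observation-minus-background departure

# BLUE-style increment: the 2D AOD departure is spread over the vertical in
# proportion to B H^T, i.e. according to the background profile and operator.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ innovation
print("background column AOD:", (H @ xb).item())
print("analysis   column AOD:", (H @ xa).item())
```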


2019 ◽  
Vol 51 (1) ◽  
pp. 17-23
Author(s):  
Joyce Z. Qian ◽  
Mara A. McAdams-DeMarco ◽  
Derek Ng ◽  
Bryan Lau

Background: The choice of vascular access for older hemodialysis patients presents a special challenge because the rate of arteriovenous fistula (AVF) primary failure is high. Lok's risk equation for predicting AVF primary failure has achieved good prediction accuracy and holds great potential for clinical use, but it has not been validated among older hemodialysis patients in the United States. Methods: From the United States Renal Data System we assembled a validation data set of 14,892 patients aged 67 years and older who initiated hemodialysis with a central venous catheter between July 1, 2010, and June 30, 2012, and had a subsequent, incident AVF placement. We examined the external validity of Lok's model by applying it to this validation data set. Discriminatory accuracy and calibration were evaluated by the concordance index (C-statistic) and a calibration plot, respectively. Results: The observed frequency of AVF primary failure varied from 0.45 to 0.53 among hemodialysis patients in the validation data set. The predicted probabilities of AVF primary failure calculated using Lok's risk equation ranged from 0.08 to 0.61, and 7.8%, 40.5%, and 51.7% of patients were categorized as having high, intermediate, and low risk of AVF primary failure, respectively. The C-statistic of Lok's risk equation in the validation data set was 0.53 (95% CI 0.52–0.54). The predicted probabilities of AVF primary failure corresponded poorly with the observed proportions in the calibration plot. Conclusions: When externally applied to a cohort of older US hemodialysis patients, Lok's risk equation exhibited poor discrimination and calibration accuracy and should not be used to predict AVF primary failure. A more complex model with strong predictors is needed to better support clinical decisions about AVF placement in this population.
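The two validation quantities used in this study, the concordance index and the calibration plot, can be illustrated in a few lines. The sketch below uses simulated predictions and outcomes (mimicking the reported 0.08 to 0.61 range of predicted risks), not the study's data; for a binary outcome the C-statistic equals the area under the ROC curve.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Hypothetical external-validation arrays: the binary observed outcome
# (AVF primary failure) and the probability predicted by a published risk
# equation applied, unchanged, to the new cohort. Values are simulated.
rng = np.random.default_rng(1)
n = 14_892
predicted = rng.uniform(0.08, 0.61, n)    # range reported in the abstract
observed = rng.binomial(1, 0.49, n)       # nearly unrelated to the predictions

# Discrimination: for a binary outcome the C-statistic equals the area under
# the ROC curve. Values near 0.5 indicate discrimination no better than chance.
c_stat = roc_auc_score(observed, predicted)
print(f"C-statistic: {c_stat:.3f}")

# Calibration: compare the mean observed event rate with the mean predicted
# risk within bins of predicted risk (the data behind a calibration plot).
obs_rate, pred_mean = calibration_curve(observed, predicted, n_bins=10)
for o, p in zip(obs_rate, pred_mean):
    print(f"predicted {p:.2f}  observed {o:.2f}")
```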


Author(s):  
Richard W. Johnson

The United States Department of Energy (DOE) is promoting the resurgence of nuclear power in the U.S. for both electrical power generation and the production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the Next Generation Nuclear Plant (NGNP) and is based on a Generation IV reactor concept called the Very High Temperature Reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 °C to perhaps 1000 °C. While computational fluid dynamics (CFD) has not been used for past safety analysis of nuclear reactors in the U.S., it is being considered for safety analysis of existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to the actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present article presents new results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.


2014 ◽  
Vol 11 (8) ◽  
pp. 12255-12294 ◽  
Author(s):  
G. Parard ◽  
A. A. Charantonis ◽  
A. Rutgerson

Abstract. Studies of coastal seas in Europe have highlighted the high variability of the CO2 system. This high variability, generated by the complex mechanisms driving the CO2 fluxes, makes their accurate estimation an arduous task. The difficulty is more pronounced in the Baltic Sea, where the mechanisms driving the fluxes have not been characterized in as much detail as in the open oceans. In addition, the joint availability of in-situ CO2 measurements and sea-surface satellite data is limited in the area. In this paper, a combination of two existing methods (self-organizing maps and multiple linear regression) is used to estimate ocean surface pCO2 in the Baltic Sea from remotely sensed surface temperature, chlorophyll, coloured dissolved organic matter, net primary production and mixed-layer depth. The outputs of this research have a horizontal resolution of 4 km and cover the period from 1998 to 2011. The reconstructed pCO2 values over the validation data set have a correlation of 0.93 with the in-situ measurements and a root mean square error of 38 μatm. The removal of any of the satellite parameters degraded the reconstruction of the CO2 flux, so we chose to complete any missing data through statistical imputation. The CO2 maps produced by this method also provide a confidence level for the reconstruction at each grid point. The results obtained are encouraging given the sparsity of available data, and we expect to be able to produce even more accurate reconstructions in the coming years, in view of the predicted acquisitions of new data.
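A minimal sketch of the two-step estimation idea follows: pixels are first grouped into regimes, then a separate multiple linear regression of pCO2 on the satellite predictors is fitted within each regime. The paper uses self-organizing maps for the grouping; KMeans is used here only as a readily available stand-in, and all values are simulated rather than taken from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Simulated satellite predictors for a set of pixels (not study data):
# sea-surface temperature, chlorophyll, a CDOM proxy and mixed-layer depth.
rng = np.random.default_rng(2)
n = 5_000
X = np.column_stack([
    rng.uniform(2, 20, n),      # SST (degC)
    rng.uniform(0.1, 10, n),    # chlorophyll (mg m^-3)
    rng.uniform(0.01, 2, n),    # CDOM proxy
    rng.uniform(5, 60, n),      # mixed-layer depth (m)
])
pco2 = 350 - 4.0 * X[:, 0] + 6.0 * np.log(X[:, 1]) + rng.normal(0, 10, n)

# Step 1: group pixels into regimes (KMeans as a stand-in for a SOM).
regimes = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Step 2: fit one multiple linear regression per regime.
models = {}
for c in range(6):
    mask = regimes.labels_ == c
    models[c] = LinearRegression().fit(X[mask], pco2[mask])

# Reconstruct pCO2 for new pixels by routing each one to its regime's model.
X_new = X[:10]
labels_new = regimes.predict(X_new)
estimates = np.array([models[c].predict(x[None, :])[0]
                      for c, x in zip(labels_new, X_new)])
print(np.round(estimates, 1))
```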


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chang Liu ◽  
Samad M.E. Sepasgozar ◽  
Sara Shirowzhan ◽  
Gelareh Mohammadi

Purpose The practice of artificial intelligence (AI) is increasingly being promoted by technology developers. However, its adoption rate is still reported as low in the construction industry due to a lack of expertise and the limited number of reliable applications for AI technology. Hence, this paper aims to present the detailed outcome of experiments evaluating the applicability and the performance of AI object detection algorithms for construction modular object detection. Design/methodology/approach This paper provides a thorough evaluation of two deep learning algorithms for object detection: the faster region-based convolutional neural network (faster RCNN) and the single shot multi-box detector (SSD). Two types of metrics are presented: first, the average recall and mean average precision by image pixels; second, the recall and precision by counting. To conduct the experiments using the selected algorithms, four infrastructure and building construction sites were chosen for collecting the required data, comprising a total of 990 images of three different but common modular objects: modular panels, safety barricades and site fences. Findings The results of the comprehensive evaluation of the algorithms show that the performance of faster RCNN and SSD depends on the context in which detection occurs. Indeed, surrounding objects and the backgrounds of the objects affect the level of accuracy obtained from the AI analysis and may particularly affect precision and recall. The analysis of loss lines shows that the loss lines for the selected objects depend on both their geometry and the image background. The results for the selected objects show that faster RCNN offers higher accuracy than SSD for their detection. Research limitations/implications The results show that modular object detection is crucial in construction for obtaining the information required for project quality and safety objectives. The detection process can significantly improve the monitoring of object installation progress in an accurate, machine-based manner, avoiding human errors. The results of this paper are limited to three construction sites, but future investigations can cover more tasks or objects from different construction sites in a fully automated manner. Originality/value This paper’s originality lies in offering new AI applications in modular construction, using a large first-hand data set collected from three construction sites. Furthermore, the paper presents the scientific evaluation results of implementing recent object detection algorithms across a set of extended metrics, using the original training and validation data sets, to improve the generalisability of the experimentation. This paper also provides practitioners and scholars with a workflow for AI applications in the modular context and first-hand reference data.
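The "recall and precision by counting" metric can be illustrated with a short, self-contained sketch: a detection counts as a true positive if it overlaps an unmatched ground-truth box above an IoU threshold. The greedy matching rule and the 0.5 threshold below are common conventions assumed for illustration; the paper's exact counting protocol may differ.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall_by_counting(pred: List[Box], truth: List[Box],
                                 iou_thresh: float = 0.5):
    """Count-based precision/recall: a detection is a true positive if it
    overlaps an unmatched ground-truth box with IoU >= iou_thresh."""
    matched = set()
    tp = 0
    for p in pred:
        best, best_iou = None, iou_thresh
        for i, t in enumerate(truth):
            if i not in matched and iou(p, t) >= best_iou:
                best, best_iou = i, iou(p, t)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Toy example: two annotated modular panels, two detections, one accurate.
truth = [(10, 10, 60, 60), (100, 100, 160, 150)]
pred = [(12, 11, 58, 62), (300, 300, 340, 340)]
print(precision_recall_by_counting(pred, truth))   # -> (0.5, 0.5)
```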


2020 ◽  
Vol 12 (6) ◽  
pp. 980
Author(s):  
Hao Sun ◽  
Baichi Zhou ◽  
Chuanjun Zhang ◽  
Hongxing Liu ◽  
Bo Yang

Improving the spatial resolution of microwave satellite soil moisture (SM) products is important for various applications. Most downscaling methods that fuse optical/thermal and microwave data rely on remotely sensed land surface temperature (LST) or LST-derived SM indexes (SMIs). However, these methods suffer from the problems of “cloud contamination”, “decomposing uncertainty”, and the “decoupling effect”. This study presents a new downscaling method, referred to as DSCALE_mod16, that does not use LST or LST-derived SMIs. The method combines MODIS evapotranspiration (ET) products and a gridded meteorological data set to obtain land surface evaporative efficiency (LEE) as the main downscaling factor. A cosine-square form of downscaling function was adopted to represent the quantitative relationship between LEE and SM. Taking the central part of the United States as the case study area, we downscaled SMAP (Soil Moisture Active Passive) SM products from their original resolution of 36 km to a resolution of 500 m. The study period spans more than three years, from 2015 to 2018. In situ SM measurements from three sparse networks and three core validation sites (CVS) were used to evaluate the downscaling model. The evaluation results indicate that the downscaled SM values maintain the spatial dynamic range of the original SM data while providing more spatial detail. Moreover, the moisture mass is conserved during the downscaling process. The downscaled SM values agree well with in situ SM measurements. The unbiased root-mean-square errors (ubRMSEs) of the downscaled SM values are 0.035 m3/m3 at Fort Cobb, 0.026 m3/m3 at Little Washita, and 0.055 m3/m3 at South Fork, which are comparable to the ubRMSEs of the original SM estimates at these three CVS.
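The abstract names a cosine-square downscaling function relating LEE to SM and notes that moisture mass is conserved; the sketch below illustrates that kind of scheme for a single 36 km cell. The specific functional form, its inversion, and the sm_max value are assumptions chosen for illustration, not the calibrated DSCALE_mod16 equations.

```python
import numpy as np

def sm_from_lee(lee, sm_max=0.45):
    """Invert an assumed cosine-square relation between land surface
    evaporative efficiency (LEE) and soil moisture (SM):
        LEE = cos^2( (pi/2) * (1 - SM / sm_max) )
    This form and sm_max are illustrative assumptions, not the calibrated
    function used in DSCALE_mod16.
    """
    lee = np.clip(lee, 0.0, 1.0)
    return sm_max * (1.0 - 2.0 / np.pi * np.arccos(np.sqrt(lee)))

def downscale_cell(sm_coarse, lee_fine):
    """Downscale one 36 km SMAP value onto its fine sub-grid while
    preserving the coarse-cell mean ("moisture mass"), as the abstract
    describes."""
    sm_fine = sm_from_lee(lee_fine)
    # Rescale so the mean of the fine-scale field equals the coarse estimate.
    return sm_fine * (sm_coarse / sm_fine.mean())

# Toy 36 km cell: one SMAP retrieval and a 4 x 4 patch of MODIS-derived LEE.
lee_fine = np.array([[0.20, 0.35, 0.50, 0.30],
                     [0.25, 0.40, 0.55, 0.45],
                     [0.15, 0.30, 0.60, 0.50],
                     [0.10, 0.25, 0.45, 0.40]])
sm_fine = downscale_cell(sm_coarse=0.18, lee_fine=lee_fine)
print(np.round(sm_fine, 3))
print("coarse mean preserved:", np.isclose(sm_fine.mean(), 0.18))
```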


2018 ◽  
Vol 34 (4) ◽  
pp. 935-960 ◽  
Author(s):  
Maarten Vanhoof ◽  
Fernando Reis ◽  
Thomas Ploetz ◽  
Zbigniew Smoreda

Abstract Mobile phone data are an interesting new data source for official statistics. However, multiple problems and uncertainties need to be solved before these data can inform, support or even become an integral part of statistical production processes. In this article, we focus on arguably the most important problem hindering the application of mobile phone data in official statistics: detecting home locations. We argue that current efforts to detect home locations suffer from a blind deployment of criteria to define a place of residence and from limited validation possibilities. We support our argument by analysing the performance of five home detection algorithms (HDAs) applied to a large French Call Detail Record (CDR) data set (~18 million users, five months). Our results show that criteria choice in HDAs influences the detection of home locations for up to about 40% of users, that HDAs perform poorly when compared with a validation data set (resulting in 358-gap), and that their performance is sensitive to the time period and the duration of observation. Based on our findings and experiences, we offer several recommendations for official statistics. If adopted, our recommendations would help ensure more reliable use of mobile phone data vis-à-vis official statistics.
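To make the criteria-choice issue concrete, the following is a minimal sketch of one common family of home detection algorithms: assign each user to the antenna that carries most of their night-time activity. The night-hour window (and any minimum observation period one might add) are exactly the kinds of criteria whose choice the article shows can move detected home locations; all records below are simulated.

```python
from collections import Counter
from datetime import datetime
from typing import Dict, Iterable, Tuple

# One CDR event: (user_id, antenna_id, timestamp). All records are simulated.
Record = Tuple[str, str, datetime]

def detect_home(records: Iterable[Record], night_start: int = 19,
                night_end: int = 7) -> Dict[str, str]:
    """Minimal home detection algorithm: for each user, return the antenna
    that carries most of their activity during assumed night-time hours.
    The night-hour window is one of the criteria whose choice can shift the
    detected home location."""
    counts: Dict[str, Counter] = {}
    for user, antenna, ts in records:
        if ts.hour >= night_start or ts.hour < night_end:
            counts.setdefault(user, Counter())[antenna] += 1
    return {user: c.most_common(1)[0][0] for user, c in counts.items()}

records = [
    ("u1", "A", datetime(2012, 3, 1, 22, 15)),
    ("u1", "A", datetime(2012, 3, 2, 23, 40)),
    ("u1", "B", datetime(2012, 3, 2, 14, 5)),   # daytime event, ignored
    ("u2", "C", datetime(2012, 3, 1, 2, 30)),
]
print(detect_home(records))   # -> {'u1': 'A', 'u2': 'C'}
```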


2013 ◽  
Vol 99 (4) ◽  
pp. 40-45 ◽  
Author(s):  
Aaron Young ◽  
Philip Davignon ◽  
Margaret B. Hansen ◽  
Mark A. Eggen

ABSTRACT Recent media coverage has focused on the supply of physicians in the United States, especially given the impact of a growing physician shortage and the Affordable Care Act. State medical boards and other entities maintain data on physician licensure and discipline, as well as some biographical data describing their physician populations. However, there are gaps in the workforce information available from these sources. The Federation of State Medical Boards' (FSMB) Census of Licensed Physicians and the AMA Masterfile, for example, offer valuable information, but they provide a limited picture of the physician workforce. Furthermore, they are unable to shed light on some of the nuances in physician availability, such as how much time physicians spend providing direct patient care. In response to these gaps, policymakers and regulators have in recent years discussed the creation of a physician minimum data set (MDS), which would be gathered periodically and would provide key physician workforce information. While proponents of an MDS believe it would provide benefits to a variety of stakeholders, no effort has been made to determine whether state medical boards consider it important to collect physician workforce data and whether they currently collect such information from licensed physicians. To learn more, the FSMB sent surveys to the executive directors of state medical boards to determine their perceptions of collecting workforce data and their current practices regarding the collection of such data. The purpose of this article is to convey the results of this effort. Survey findings indicate that the vast majority of boards view physician workforce information as valuable in the determination of health care needs within their state and that various boards are already collecting some data elements. Analysis of the data confirms the potential benefits of a physician MDS and why state medical boards are in a unique position to collect MDS information from physicians.


BMJ Open ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. e040778
Author(s):  
Vineet Kumar Kamal ◽  
Ravindra Mohan Pandey ◽  
Deepak Agrawal

Objective: To develop and validate a simple risk-score chart to estimate the probability of poor outcomes in patients with severe head injury (HI). Design: Retrospective. Setting: Level-1, government-funded trauma centre, India. Participants: Patients with severe HI admitted to the neurosurgery intensive care unit during 19 May 2010–31 December 2011 (n=946) for model development and, additionally, data from the same centre with the same inclusion criteria from 1 January 2012 to 31 July 2012 (n=284) for external validation of the model. Outcome(s): In-hospital mortality and unfavourable outcome at 6 months. Results: A total of 39.5% and 70.7% of patients had in-hospital mortality and an unfavourable outcome, respectively, in the development data set. Multivariable logistic regression analysis of routinely collected admission characteristics revealed that the independent predictors of in-hospital mortality were age (51–60, >60 years), motor score (1, 2, 4), pupillary reactivity (none), presence of hypotension, effaced basal cisterns, and traumatic subarachnoid haemorrhage/intraventricular haematoma, while the independent predictors of unfavourable outcome were age (41–50, 51–60, >60 years), motor score (1–4), pupillary reactivity (none, one), unequal limb movement and presence of hypotension, as the 95% confidence interval (CI) of each odds ratio (OR) did not contain one. The discriminative ability (area under the receiver operating characteristic curve (95% CI)) of the score chart for in-hospital mortality and the 6-month outcome was excellent in the development data set (0.890 (0.867 to 0.912) and 0.894 (0.869 to 0.918), respectively), in the internal validation data set using the bootstrap resampling method (0.889 (0.867 to 0.909) and 0.893 (0.867 to 0.915), respectively) and in the external validation data set (0.871 (0.825 to 0.916) and 0.887 (0.842 to 0.932), respectively). Calibration showed good agreement between observed outcome rates and predicted risks in the development and external validation data sets (p>0.05). Conclusion: For clinical decision making, these score charts can be used to predict outcomes in new patients with severe HI in India and similar settings.
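The internal validation step reported above, re-estimating the area under the ROC curve by bootstrap resampling, can be sketched as follows. The predictors, coefficients and sample size are simulated stand-ins for the development cohort, and the simple refit-and-rescore loop is only one variant of bootstrap internal validation; the article's exact procedure may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in for the development cohort: a few routinely collected
# admission predictors and the in-hospital mortality outcome (all simulated).
rng = np.random.default_rng(3)
n = 946
X = np.column_stack([
    rng.integers(0, 2, n),        # pupils non-reactive
    rng.integers(1, 7, n),        # motor score
    rng.integers(0, 2, n),        # hypotension at admission
    rng.integers(18, 90, n),      # age in years
])
logit = -2.0 + 1.2 * X[:, 0] - 0.4 * X[:, 1] + 0.9 * X[:, 2] + 0.02 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Internal validation by bootstrap resampling: refit the model on each
# resample and score the refitted model on the original development data.
boot_aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"apparent AUC {apparent_auc:.3f}, bootstrap 95% CI {lo:.3f}-{hi:.3f}")
```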

