Evaluating the Effects of Ovality on the Integrity of Pipe Bends

Author(s):  
Chris Alexander

This paper details a study performed for a liquids pipeline operator to evaluate the effects of ovality on the mechanical integrity of pipe bends in its 16-inch pipe system. Prior to this study, a caliper tool run indicated that unacceptable ovality was present in the bends relative to the requirements of ASME B31.4. An engineering investigation was performed based on the methodology of API 579, Fitness-for-Service. This standard provides guidance on evaluating defects using a multi-level assessment approach (Levels 1, 2, and 3) that rewards more rigorous evaluation with reduced design margins. An extensive evaluation was therefore performed, involving field measurements of the bends in the ditch. Using these ovality measurements, calculations were performed with the closed-form equations in API 579 for a Level 2 assessment. Several of the bends were deemed unacceptable based on the in-field measurements. Consequently, a Level 3 assessment was completed using finite element analysis (FEA). The results of this more rigorous analysis, coupled with more favorable design margins, showed these bends to be acceptable. A tool was developed to permit a general assessment of pipe bends having ovality and was validated by a full-scale burst test.
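The ovality measure underlying this kind of assessment can be sketched in a few lines. This is a minimal illustration using the common (D_max − D_min)/D_nominal definition with hypothetical field measurements; the governing code (e.g., ASME B31.4 or API 579) fixes the exact definition and acceptance limits.

```python
def ovality_percent(d_max, d_min, d_nom):
    """Percent ovality of a pipe cross section.

    Uses the common definition (D_max - D_min) / D_nominal * 100.
    Codes differ slightly in the reference diameter used, so confirm
    the governing definition before applying any acceptance limit.
    """
    return (d_max - d_min) / d_nom * 100.0


# Hypothetical in-ditch measurements for a 16-inch (406.4 mm) bend
d_max, d_min, d_nom = 414.0, 398.0, 406.4
print(f"Ovality: {ovality_percent(d_max, d_min, d_nom):.2f}%")
```

A reading below a code limit such as 5% would pass a simple screening; readings above it would trigger the kind of Level 2/Level 3 evaluation described in the abstract.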

Author(s):  
Henry Kwok ◽  
Simon Yuen ◽  
Jorge Penso

The overall framework for a Level 2 assessment of local thermal hot spots in pressure vessels was first developed by Seshadri [1]. The assessment procedure invokes the concepts of the integral mean of yield and of a reference volume to determine the reduction in load capacity caused by hot spot damage. This paper investigates the accuracy of this assessment by comparing the results of the Level 2 assessment with a Level 3 assessment (inelastic finite element analysis). Three examples with varying pressure-component and hot-spot sizes are considered. The comparison yielded low variance between the Level 2 and Level 3 assessments, with the Level 2 assessment being more conservative.


2020 ◽  
Vol 9 (4) ◽  
pp. 1461-1467
Author(s):  
Indrarini Dyah Irawati ◽  
Sugondo Hadiyoso ◽  
Yuli Sun Hariyani

In this study, we proposed compressive sampling for MRI reconstruction based on sparse representation using multi-wavelet transformation, comparing the performance of wavelet decomposition at Levels 1, 2, 3, and 4. We used a Gaussian random process to generate the measurement matrix. The algorithm used to reconstruct the image is . The experimental results showed that multi-level wavelet decomposition can generate a higher compression ratio but requires longer processing time. MRI reconstruction results based on the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) show that both metrics decrease as the wavelet decomposition level increases.
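The PSNR metric used to score the reconstructions can be sketched as follows; this is a minimal pure-Python version over flattened 8-bit pixel sequences (SSIM is considerably more involved and is omitted here).

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length
    flattened pixel sequences: 10 * log10(MAX^2 / MSE)."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)


# Small worked example: a near-perfect reconstruction scores high
print(psnr([100, 150, 200], [101, 149, 202]))
```

Higher PSNR indicates a reconstruction closer to the original, which is why the metric drops as coarser (higher-level) decompositions discard more detail.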


Author(s):  
David Kemp ◽  
Justin Gossard ◽  
Shane Finneran ◽  
Joseph Bratton

Pipe ovalization, a deviation from the circular nominal cross section, is a common occurrence during the manufacturing of pipe sections. Ovalization can also occur in pipelines during and after installation and construction. CSA Z662-11 [1] provides an acceptance criterion of 5% for pipeline ovality in bends; however, acceptance criteria vary for ovality in straight pipe sections. An industry review of pipeline design, operation, and maintenance codes was conducted to determine the accepted ovality limits for straight pipe sections. Based on this review, the ovality limits were evaluated against constructability limits, limitations on the passage of in-line inspection (ILI) tools, and the stress in an ovalized pipe section compared to the maximum allowable stress of the pipe. The review revealed that allowable stress, rather than constructability or ILI tool passage, was the limiting factor for pipeline ovality, so this paper primarily discusses limitations related to the remaining strength of ovalized pipe sections. The API 579 Fitness-for-Service assessment was used to evaluate varying levels of ovality and determine acceptability criteria for ovalization in straight pipe. The criterion was first established using a Level 2 Fitness-for-Service assessment and then evaluated with a Level 3 assessment using finite element analysis. This criterion was evaluated using multiple pipeline diameters and wall thicknesses in order to determine scalability.


2017 ◽  
Vol 8 (5) ◽  
pp. 530-543 ◽  
Author(s):  
Rachman Setiawan ◽  
Musthafa Akbar

Purpose: Integrity assessment is used to ensure the reliable operation of pressurized equipment containing defects. Based on the cylindrical shell dimensions, operating conditions, material properties, and crack dimensions, an assessment can be carried out using a Level 1, Level 2, or Level 3 procedure. Assessment using the code's Level 3 procedure requires a finite element simulation to generate both the evaluation point and the failure assessment diagram (FAD) that serves as the acceptance criterion. The purpose of this paper is to provide the numerical data used for the integrity assessment of a pressure vessel containing a crack. A parametric study was carried out to generate such results for longitudinal crack defects in a cylindrical shell over a number of common cases, in terms of thickness-to-radius ratio, crack size ratio, and crack aspect ratio.

Design/methodology/approach: The stress intensity factor is determined through the J-integral parameter, found using a finite element analysis with a special meshing strategy incorporating the crack. A comparison is made against the stress intensity factor provided by the code.

Findings: Good agreement is obtained, with errors of 2.13 percent for a low-aspect-ratio crack and 0.57 percent for a high-aspect-ratio crack. Furthermore, the methodology was applied to 160 cases, covering both cases already available in the code and other cases of cracks in cylindrical shells. The results can be used as a complement to the existing tabular data available in the code for Level 2 assessment, for integrity analysis of damaged cylindrical shells based on the FAD criteria.

Originality/value: The results can be used as a complement to the existing tabular data available in the API 579 code for Level 2 assessment, for integrity analysis of damaged cylindrical shells based on the FAD criteria. New equations were generated based on finite element analysis and can be used for Level 3 assessment per the code.
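A FAD acceptance check of the kind described above can be sketched as follows. This assumes the widely used Level 2 envelope K_r = (1 − 0.14 L_r²)(0.3 + 0.7 e^(−0.65 L_r⁶)) from API 579 / BS 7910; the L_r cut-off shown is a hypothetical placeholder, since the actual cut-off depends on the material's flow strength ratio.

```python
import math

def fad_kr_limit(lr):
    """Level 2 FAD envelope: K_r as a function of L_r.

    K_r = (1 - 0.14 * L_r^2) * (0.3 + 0.7 * exp(-0.65 * L_r^6))
    """
    return (1.0 - 0.14 * lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * lr ** 6))

def acceptable(kr, lr, lr_max=1.0):
    """An evaluation point (L_r, K_r) is acceptable if it lies on or
    below the FAD envelope and L_r does not exceed the material cut-off
    (lr_max here is a placeholder, not a code value)."""
    return lr <= lr_max and kr <= fad_kr_limit(lr)
```

An evaluation point from FEA (toughness ratio K_r, load ratio L_r) plots inside the envelope when the flaw is acceptable; points on or outside it fail the assessment.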


1998 ◽  
Vol 10 (1-3) ◽  
pp. 57-72 ◽  
Author(s):  
K. S. B. Keats-Rohan

The COEL database and database software, a combined reference and research tool created by historians for historians, is presented here through screenshots illustrating the underlying theoretical model and the specific situation to which it has been applied. The key emphases are on data integrity and the historian's role in interpreting and manipulating what is often contentious data. From a corpus of sources (Level 1), certain core data are extracted for separate treatment at an interpretive level (Level 3), based upon a master list of the core data (Level 2). The core data are interdependent: each record in Level 2 is of interest in itself, and it either could or should be associated with one or more other records as a specific entity. Sometimes the sources are ambiguous and the association is contentious, necessitating a probability-coding approach. The entities created by the association process can then be treated at a commentary level, introducing material external to the database, whether primary or secondary sources. A full discussion of the difficulties is provided within a synthesis of available information on the core data. Direct access to the source texts is only ever a mouse click away. Fully queryable, COEL is a formidable look-up and research tool for users of all levels, who remain free to exercise alternative judgement on the associations of the core data. In principle, there is no limit on the type of text or core data that could be handled in such a system.


Author(s):  
Lania Muharsih ◽  
Ratih Saraswati

This study aims to evaluate training at PT. Pupuk Kujang, a company engaged in the petrochemical sector. The company's evaluation sheet is based on Kirkpatrick's model, which consists of four levels of evaluation: reaction, learning, behavior, and results. At Level 1 (reaction), the evaluation sheet conforms to Kirkpatrick's theory. At Level 2 (learning), a pretest and posttest should be administered, but only a rating scale was used. At Level 3 (behavior), the sheet follows the theory, but assessment factor number 3, work quantity and productivity, should not be included because it belongs to Level 4. At Level 4 (results), the sheet still falls short of capturing the outcomes of the training, because it relies solely on supervisors' answers without any supporting documents. Keywords: Training Evaluation, Kirkpatrick Theory.


2020 ◽  
Vol 41 (9) ◽  
pp. 1035-1041
Author(s):  
Erika Y. Lee ◽  
Michael E. Detsky ◽  
Jin Ma ◽  
Chaim M. Bell ◽  
Andrew M. Morris

Abstract
Objectives: Antibiotics are commonly used in intensive care units (ICUs), yet differences in antibiotic use across ICUs are unknown. Herein, we studied antibiotic use across ICUs and examined factors that contributed to variation.
Methods: We conducted a retrospective cohort study using data from Ontario's Critical Care Information System (CCIS), which included 201 adult ICUs and 2,013,397 patient days from January 2012 to June 2016. Antibiotic use was measured in days of therapy (DOT) per 1,000 patient days. ICU factors included the ability to provide ventilator support (level 3) or not (level 2), ICU type (medical-surgical or other), and academic status. Patient factors included severity of illness using the multiple-organ dysfunction score (MODS), ventilatory support, and central venous catheter (CVC) use. We analyzed the effect of these factors on variation in antibiotic use.
Results: Overall, 269,351 patients (56%) received antibiotics during their ICU stay. Mean antibiotic use was 624 (range, 3–1,460) DOT per 1,000 patient days. Antibiotic use was significantly higher in medical-surgical ICUs than in other ICUs (697 vs 410 DOT per 1,000 patient days; P < .0001) and in level 3 ICUs than in level 2 ICUs (751 vs 513 DOT per 1,000 patient days; P < .0001). Higher antibiotic use was associated with higher severity of illness and intensity of treatment. ICU and patient factors explained 47% of the variation in antibiotic use across ICUs.
Conclusions: Antibiotic use varies widely across ICUs, and this variation is partially associated with ICU and patient characteristics. These differences highlight the importance of antimicrobial stewardship to ensure appropriate use of antibiotics in ICU patients.
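The DOT-per-1,000-patient-days measure is a simple normalization that lets ICUs of different sizes be compared; a minimal sketch with hypothetical counts:

```python
def dot_per_1000_patient_days(days_of_therapy, patient_days):
    """Antibiotic use normalized as days of therapy (DOT)
    per 1,000 patient days, the standard stewardship metric."""
    return days_of_therapy * 1000.0 / patient_days


# Hypothetical ICU: 300 DOT accrued over 600 patient days
print(dot_per_1000_patient_days(300, 600))  # → 500.0
```

One DOT is counted for each antibiotic a patient receives on a given day, so a patient on two agents accrues two DOT per day.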


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4118
Author(s):  
Leonardo F. Arias-Rodriguez ◽  
Zheng Duan ◽  
José de Jesús Díaz-Torres ◽  
Mónica Basilio Hazas ◽  
Jingshui Huang ◽  
...  

Remote sensing, as a driver for water management decisions, needs further integration with water quality monitoring programs, especially in developing countries. Moreover, remote sensing approaches have not been broadly applied in monitoring routines. It is therefore necessary to assess the efficacy of available sensors to complement the often limited field measurements from such programs and to build models that support monitoring tasks. Here, we integrate field measurements (2013–2019) from the Mexican national water quality monitoring system (RNMCA) with data from Landsat-8 OLI, Sentinel-3 OLCI, and Sentinel-2 MSI to train an extreme learning machine (ELM), a support vector regression (SVR), and a linear regression (LR) for estimating chlorophyll-a (Chl-a), turbidity, total suspended matter (TSM), and Secchi disk depth (SDD). Additionally, OLCI Level-2 products for Chl-a and TSM are compared against the RNMCA data. We observed that the OLCI Level-2 products are poorly correlated with the RNMCA data, and it is not feasible to rely on them alone to support monitoring operations. However, OLCI atmospherically corrected data are useful for developing accurate models using an ELM, particularly for turbidity (R² = 0.7). We conclude that remote sensing is useful for supporting monitoring-system tasks, and that its progressive integration will improve the quality of water quality monitoring programs.
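Of the three models compared, the linear regression baseline is the simplest. The study presumably regresses over several band reflectances, so the single-predictor closed form below is illustrative only, mapping one hypothetical band value to a water quality parameter:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a + b*x (closed form, one predictor).

    Returns the intercept a and slope b that minimize squared error.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b


# Hypothetical calibration: band reflectance vs measured turbidity
a, b = fit_linear([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
print(a, b)  # intercept and slope of the fitted line
```

The ELM and SVR models in the study play the same role as this fit but capture nonlinear relationships, which is why they outperform LR for parameters like turbidity.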


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 869
Author(s):  
Xiuguo Zou ◽  
Jiahong Wu ◽  
Zhibin Cao ◽  
Yan Qian ◽  
Shixiu Zhang ◽  
...  

To adequately characterize the visual characteristics of atmospheric visibility and overcome the disadvantages of traditional atmospheric visibility measurement methods, which depend heavily on preset reference objects, are costly, and involve complicated steps, this paper proposes an ensemble learning method for atmospheric visibility grading based on a deep neural network and stochastic weight averaging. An experiment was conducted using an expressway scene, with three visibility levels: Level 1, Level 2, and Level 3. First, EfficientNet was transferred to extract abstract features from the images. Then, training and grading were performed on the feature sets through a softmax regression model. Subsequently, the trained models were ensembled using stochastic weight averaging to obtain the atmospheric visibility grading model. The obtained datasets were input into the grading model and tested. The grading model classified the results into three categories, with grading accuracies of 95.00%, 89.45%, and 90.91%, respectively, and an average accuracy of 91.79%. Compared with existing methods, the proposed method showed better performance. This method can be used to classify atmospheric visibility on roadways and reduce the incidence of traffic accidents caused by poor visibility.
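The core of stochastic weight averaging is an element-wise average of parameter snapshots collected along the training trajectory. The sketch below operates on flat weight lists for clarity; a real implementation averages full network tensors at snapshot intervals and then refreshes batch-normalization statistics.

```python
def stochastic_weight_average(snapshots):
    """Element-wise average of parameter snapshots (flat lists of floats)
    collected along the training trajectory - the core of SWA."""
    n = len(snapshots)
    return [sum(ws) / n for ws in zip(*snapshots)]


# Two hypothetical snapshots of a tiny two-parameter model
averaged = stochastic_weight_average([[1.0, 2.0], [3.0, 4.0]])
print(averaged)  # → [2.0, 3.0]
```

Averaging weights from several points late in training tends to land the model in a flatter region of the loss surface, which typically improves generalization over any single snapshot.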


2020 ◽  
Vol 9 (1) ◽  
Author(s):  
Yuguo Qian ◽  
Weiqi Zhou ◽  
Steward T. A. Pickett ◽  
Wenjuan Yu ◽  
Dingpeng Xiong ◽  
...  

Abstract
Background: Cities are social-ecological systems characterized by remarkably high spatial and temporal heterogeneity, which is closely related to myriad urban problems. However, tools to map and quantify this heterogeneity are lacking. We developed a new three-level classification scheme, considering ecosystem types (level 1), urban function zones (level 2), and land cover elements (level 3), to map and quantify the hierarchical spatial heterogeneity of urban landscapes.
Methods: We applied the scheme using an object-based classification approach with very high spatial resolution imagery and a vector layer of building locations and characteristics. We used a top-down procedure, conducting the classification in the order of ecosystem types, function zones, and land cover elements, with the classification at each lower level based on the results of the level above.
Results: We found that the urban ecosystem type accounted for 45.3% of the land within the Shenzhen city administrative boundary. Within the urban ecosystem type, residential and industrial zones were the main zones, accounting for 38.4% and 33.8%, respectively. Tree canopy was the dominant element in Shenzhen, accounting for 55.6% across all ecosystem types, including agricultural and forest. However, within the urban ecosystem type the proportion of tree canopy was only 22.6%, because most trees were distributed in the forest ecosystem type. The proportion of trees was 23.2% in industrial zones, 2.2 percentage points higher than in residential zones. Such information, "hidden" in the usual statistical summaries scaled to the entire administrative unit of Shenzhen, has great potential for improving urban management.
Conclusions: This paper takes the theoretical understanding of urban spatial heterogeneity and uses it to generate a classification scheme that exploits remotely sensed imagery, infrastructural data available at a municipal level, and object-based spatial analysis. For effective planning and management, the hierarchical classification of ecosystem types (level 1), the analysis of use and cover by urban zones (level 2), and the fundamental elements of land cover (level 3) each exposes different aspects relevant to city planning and management.

