Implementing Business Analytics Software to Optimize Coiled Tubing Operations: A Digital Approach to Operations Efficiency

2021 ◽  
Author(s):  
Xaymaca Bautista Alarcon ◽  
Carlos Torres

Abstract This paper describes how novel business analytics software tools combined with advanced data management techniques can be integrated into the management workflows and decision-making processes of a Coiled Tubing Service Company (CTSC). An advanced data wrangling process (transforming and mapping data) was designed and implemented to unify current and historical coiled tubing operational data into a single data set. The latter was regularly updated with the latest operational information through an automated data querying process that eliminated the need for manually repeating the data wrangling. A Business Analytics Software (BAS) was used to accelerate the engineering of a Data Analytics (DA) process, identify correlations and trends, design data models, and create relevant Key Performance Indicators (KPI). Finally, BAS visualization tools were used to prepare and publish comprehensive cloud-based Business Intelligence (BI) dashboards and reports. The BI dashboard allows the coiled tubing company to quickly perform accurate and efficient analysis of the KPI trends of its coiled tubing units and well interventions. The cloud-based dashboard enables the CTSC to:
- Effectively identify coiled tubing string utilization, costs, failures, vendors, and design performance trends based on factual data, thus enabling informed pipe management decisions.
- Clearly identify high-performance coiled tubing units and crews and make strategic decisions for high-profile jobs.
- Identify equipment failures and non-productive time (NPT) trends, and define, implement, and monitor maintenance and asset management strategies to tackle the failures with the greatest impact on operational performance.
- Easily incorporate fresh operational data and detect record-breaking operations.
- Improve customer relationship management by quickly responding to customer inquiries for tailor-made operations performance reports.
- Decrease workload on repetitive processes by generating reports with relevant and accurate data faster than the previous methods in place.
- Quickly customize and create accurate documentation compiling relevant performance data in a consistent and standardized fashion.
- Perform technical analysis of well interventions.
This contemporary approach of integrating BI software into coiled tubing operations is a step change in how service companies and operators analyze and monitor performance (efficiency, safety, and quality) to differentiate themselves from each other, optimize results, and minimize costs. This paper describes an example of a simple yet sophisticated and effective way of incorporating digital tools into oilfield services processes by utilizing in-house talent and resources.
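As a rough illustration of the kind of automated data wrangling and KPI computation the abstract describes, the sketch below (not taken from the paper; file names, column names, and the KPI definition are hypothetical) unifies historical and fresh operational records and derives a simple NPT ratio per coiled tubing unit:

```python
# Hypothetical sketch of an automated data-wrangling and KPI step.
# File names, column names, and the KPI definition are illustrative assumptions,
# not the paper's actual schema.
import pandas as pd

def load_and_unify(paths):
    """Read raw operational exports and map them onto a single schema."""
    frames = []
    for path in paths:
        df = pd.read_csv(path)
        # Normalize column names so historical and current exports line up.
        df = df.rename(columns=str.lower).rename(
            columns={"unit_id": "ct_unit", "npt_hours": "npt_hr", "job_hours": "job_hr"}
        )
        frames.append(df[["ct_unit", "job_hr", "npt_hr"]])
    return pd.concat(frames, ignore_index=True)

def npt_kpi(ops: pd.DataFrame) -> pd.DataFrame:
    """Example KPI: non-productive time as a fraction of total job hours per unit."""
    grouped = ops.groupby("ct_unit")[["job_hr", "npt_hr"]].sum()
    grouped["npt_ratio"] = grouped["npt_hr"] / grouped["job_hr"]
    return grouped.sort_values("npt_ratio")

if __name__ == "__main__":
    data = load_and_unify(["historical_jobs.csv", "latest_jobs.csv"])
    print(npt_kpi(data))
```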

Author(s):  
John W. Coleman

In the design engineering of high-performance electromagnetic lenses, the direct conversion of electron optical design data into drawings for reliable hardware is oftentimes difficult, especially in terms of how to mount parts to each other, how to tolerance dimensions, and how to specify finishes. An answer to this is the use of magnetostatic analytics corresponding to the boundary conditions of the optical design. With such models, the magnetostatic force on a test pole along the axis may be examined, and in this way one may obtain priority listings for holding dimensions, relieving stresses, etc. The development of magnetostatic models most easily proceeds from the derivation of scalar potentials of separate geometric elements. These potentials can then be combined at will because of the superposition characteristic of conservative force fields.
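As a hedged aside (standard magnetostatics rather than anything specific to this paper), the superposition property invoked here can be written as

\[
\Phi(z) \;=\; \sum_i \Phi_i(z), \qquad F_z(z) \;=\; -\mu_0\, q_m\, \frac{\partial \Phi}{\partial z},
\]

where the \(\Phi_i\) are the scalar potentials of the individual geometric elements and \(F_z\) is the axial force on a test pole of strength \(q_m\) (the sign and the \(\mu_0\) factor depend on the convention adopted).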


2007 ◽  
Vol 2 (2) ◽  
Author(s):  
S.E. Walters ◽  
D. Savic ◽  
R.J. Hocking

The water industry has over the years primarily focussed on upgrading and investing in clean water provision. However, while research into the science and management of clean water services has progressed rapidly, progress in wastewater provision and services has been slower. Focus within industry and research is now shifting to wastewater services. The water regulator for England and Wales, Ofwat, demands that Sewerage Undertakers demonstrate efficient management of wastewater systems in order to obtain funding for capital investment projects. South West Water (SWW), a Water Service Provider and Sewerage Undertaker located in the South West of England, identified a gap in its asset management strategies for wastewater catchments. This paper introduces the production of a Decision Support Tool (DST) to help SWW proactively manage its wastewater catchments, examining sewage treatment works, pumping stations and networks. The paper discusses some concepts within the DST, its production, testing and a brief case study. The DST provides a framework for prioritising catchments to optimise investment choices and actions. The tool ranks catchments using Compromise Programming (CP), together with AHP pair-wise comparisons for preference weights; a minimal sketch of this ranking step is shown below. The DST incorporates asset models, a whole-life costing module, and a decay and intervention module.
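The sketch below is a minimal illustration of Compromise Programming ranking with externally supplied preference weights (for example from AHP); it is not SWW's actual tool, and the criteria, weights, and scores are hypothetical placeholders.

```python
# Illustrative sketch of Compromise Programming ranking.
import numpy as np

def compromise_ranking(scores: np.ndarray, weights: np.ndarray, p: float = 2.0):
    """Rank alternatives by their L_p distance to the ideal point.

    scores  : (n_alternatives, n_criteria) matrix, larger = better.
    weights : criterion weights (e.g. from AHP pair-wise comparisons), summing to 1.
    """
    best = scores.max(axis=0)                          # ideal value per criterion
    worst = scores.min(axis=0)                         # anti-ideal value per criterion
    span = np.where(best > worst, best - worst, 1.0)   # avoid division by zero
    # Normalised regret from the ideal, weighted and aggregated with the L_p metric.
    regret = (best - scores) / span
    distance = ((weights * regret) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(distance), distance

if __name__ == "__main__":
    # Three hypothetical catchments scored on capacity, condition, and pollution risk.
    scores = np.array([[0.8, 0.4, 0.9],
                       [0.5, 0.9, 0.6],
                       [0.3, 0.7, 0.8]])
    weights = np.array([0.5, 0.3, 0.2])  # assumed AHP-derived preference weights
    order, dist = compromise_ranking(scores, weights)
    print(order, dist)                   # catchments ranked from highest priority
```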


2019 ◽  
Vol 13 (1) ◽  
pp. 20-27 ◽  
Author(s):  
Srishty Jindal ◽  
Kamlesh Sharma

Background: With the tremendous increase in the use of social networking sites for sharing emotions, views, preferences, etc., a huge volume of data and text is available on the internet, and with it comes the need to understand the text and analyse the data to determine the exact intent behind it. This process of understanding the text and data involves numerous analytical methods, several phases and multiple techniques. Efficient use of these techniques is important for an effective and relevant understanding of the text/data. Such analysis can in turn be very helpful in e-commerce for targeting audiences, in social media monitoring for anticipating harmful elements in society and taking proactive action to avoid unethical and illegal activities, in business analytics, in market positioning, etc. Method: The goal is to understand the basic steps involved in analysing text data, which can be helpful in determining the sentiments behind it. This review provides a detailed description of the steps involved in sentiment analysis together with recent research. Patents related to sentiment analysis and classification are reviewed to shed some light on the work done in the field. Results: Sentiment analysis determines the polarity behind text data/reviews. This analysis helps in increasing business revenue, in e-health, and in determining the behaviour of a person. Conclusion: This study helps in understanding the basic steps involved in natural language understanding. At each step there are multiple techniques that can be applied to the data. Different classifiers provide variable accuracy depending upon the data set and classification technique used.
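As a minimal sketch of the pipeline the review describes (pre-processing, feature extraction, classification), the example below uses TF-IDF features with a Naive Bayes classifier; the tiny data set is invented, and any other classifier could be swapped in to compare accuracy, as the conclusion notes.

```python
# Minimal sentiment-classification sketch; the toy corpus is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, loved it", "terrible service, very slow",
         "works fine", "worst purchase ever"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF handles tokenisation and feature weighting; the classifier could be
# replaced (SVM, logistic regression, ...) to compare accuracy on a real data set.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["slow but fine overall"]))
```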


Author(s):  
C. Sauer ◽  
F. Bagusat ◽  
M.-L. Ruiz-Ripoll ◽  
C. Roller ◽  
M. Sauer ◽  
...  

Abstract This work aims at the characterization of a modern concrete material. For this purpose, we perform two experimental series of inverse planar plate impact (PPI) tests with the ultra-high performance concrete B4Q, using two different witness plate materials. Hugoniot data in the range of particle velocities from 180 to 840 m/s and stresses from 1.1 to 7.5 GPa are derived from both series. Within the experimental accuracy, they can be seen as one consistent data set. Moreover, we conduct corresponding numerical simulations and find a reasonably good agreement between simulated and experimentally obtained curves. From the simulated curves, we derive numerical Hugoniot results that serve as a homogenized, mean shock response of B4Q and add further consistency to the data set. Additionally, the comparison of simulated and experimentally determined results allows us to identify experimental outliers. Furthermore, we perform a parameter study which shows that a significant influence of the applied pressure-dependent strength model on the derived equation of state (EOS) parameters is unlikely. In order to compare the current results to our own partially reevaluated previous work and selected recent results from literature, we use simulations to numerically extrapolate the Hugoniot results. Considering their inhomogeneous nature, a consistent picture emerges for the shock response of the discussed concrete and high-strength mortar materials. Hugoniot results from this and earlier work are presented for further comparisons. In addition, a full parameter set for B4Q, including validated EOS parameters, is provided for the application in simulations of impact and blast scenarios.
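For orientation, the standard Rankine–Hugoniot momentum jump condition and the common linear shock-velocity fit that underlie planar-plate-impact Hugoniot analysis (general relations, not parameters taken from this paper) read

\[
\sigma - \sigma_0 \;=\; \rho_0\, U_s\, u_p, \qquad U_s \;=\; c_0 + S\, u_p,
\]

where \(\rho_0\) is the initial density, \(U_s\) the shock velocity, \(u_p\) the particle velocity, and \(c_0\), \(S\) the linear Hugoniot fit parameters.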


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is done to remove unnecessary and redundant words from the documents. Then, the indexing of documents is done by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm and inverted indexing. After providing the index to the documents, query matching is performed for the user queries using the Bhattacharyya distance. Finally, query optimisation is done with the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB data set and the Twenty Newsgroups data set. The analysis shows that the proposed algorithm offers high performance with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
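A hedged sketch of the query-matching step described above: documents and the query are treated as term-frequency distributions and compared with the Bhattacharyya distance. The toy corpus and vocabulary are invented, and the clustering and Pearson-based optimisation stages are omitted.

```python
# Bhattacharyya-distance query matching over term distributions (illustrative only).
import numpy as np

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Distance between two discrete probability distributions."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))       # guard against log(0)

def to_distribution(counts: np.ndarray) -> np.ndarray:
    return counts / counts.sum()

# Toy term-count vectors over a shared vocabulary.
doc_counts = np.array([[3, 0, 1, 2], [0, 4, 1, 0], [1, 1, 1, 1]], dtype=float)
query_counts = np.array([2, 0, 1, 1], dtype=float)

query = to_distribution(query_counts)
distances = [bhattacharyya_distance(to_distribution(d), query) for d in doc_counts]
print(np.argsort(distances))  # documents ranked from most to least similar
```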


Author(s):  
Stephen T. Benedict ◽  
Thomas P. Knight

The hydraulic design of bridges is a discipline that requires a strong measure of engineering judgment. Developing good engineering judgment can take years of experience and generally grows one project at a time. A supplemental tool that can promote the development of engineering knowledge and judgment is to compile, analyze, and graphically present hydraulic data associated with stream and bridge-design characteristics from previously analyzed bridges. If the data set is sufficiently large, graphs developed from such an effort can provide the engineer with an enhanced picture of stream and bridge-design characteristics, helping them further develop their engineering knowledge and judgment. Furthermore, such graphs can function as project scoping tools and hydraulic-design review tools. Using selected data from approximately 300 bridge-scour studies in South Carolina, previously conducted by the U.S. Geological Survey, and limited hydraulic bridge-design data for approximately 200 bridges in South Carolina, trends in stream and bridge-hydraulic characteristics were evaluated, including channel width, floodplain width, flood flow depths, stream slopes, bridge backwater, bridge flow velocity, and bridge lengths. Selected relationships are presented in this paper and should serve as a valuable tool for better understanding stream and bridge-hydraulic characteristics in South Carolina.


Author(s):  
Adrienne M Stilp ◽  
Leslie S Emery ◽  
Jai G Broome ◽  
Erin J Buth ◽  
Alyna T Khan ◽  
...  

Abstract Genotype-phenotype association studies often combine phenotype data from multiple studies to increase power. Harmonization of the data usually requires substantial effort due to heterogeneity in phenotype definitions, study design, data collection procedures, and data set organization. Here we describe a centralized system for phenotype harmonization that includes input from phenotype domain and study experts, quality control, documentation, reproducible results, and data sharing mechanisms. This system was developed for the National Heart, Lung, and Blood Institute's Trans-Omics for Precision Medicine program, which is generating genomic and other omics data for >80 studies with extensive phenotype data. To date, 63 phenotypes have been harmonized across thousands of participants from up to 17 studies per phenotype (participants recruited 1948-2012). We discuss challenges in this undertaking and how they were addressed. The harmonized phenotype data and associated documentation have been submitted to National Institutes of Health data repositories for controlled access by the scientific community. We also provide materials to facilitate future harmonization efforts by the community, which include (1) the code used to generate the 63 harmonized phenotypes, enabling others to reproduce, modify or extend these harmonizations to additional studies; and (2) results of labeling thousands of phenotype variables with controlled vocabulary terms.
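As a purely illustrative sketch (not TOPMed's actual harmonization code), the example below maps per-study variables with different names and units onto a single phenotype definition; the study names, column names, and units are hypothetical.

```python
# Toy phenotype harmonization: unify one variable across studies with different schemas.
import pandas as pd

def harmonize_height(study_frames: dict) -> pd.DataFrame:
    """Map per-study height variables onto a single definition in centimetres."""
    # Per-study mapping: (source column, factor converting to cm).
    mapping = {"study_a": ("ht_cm", 1.0), "study_b": ("height_in", 2.54)}
    pieces = []
    for study, df in study_frames.items():
        col, factor = mapping[study]
        pieces.append(pd.DataFrame({
            "subject_id": df["subject_id"],
            "height_cm": df[col] * factor,
            "study": study,
        }))
    return pd.concat(pieces, ignore_index=True)

if __name__ == "__main__":
    frames = {
        "study_a": pd.DataFrame({"subject_id": [1, 2], "ht_cm": [170.0, 165.0]}),
        "study_b": pd.DataFrame({"subject_id": [3], "height_in": [70.0]}),
    }
    print(harmonize_height(frames))
```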


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNNs for short) have made great progress in face detection. They mostly take computation-intensive networks as the backbone in order to obtain high precision, and they cannot achieve good detection speed without the support of high-performance GPUs (Graphics Processing Units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector in this paper, which takes a fast residual network as its backbone. Our method can run fast even on cheap, ordinary GPUs. To guarantee its detection precision, multi-scale features and multi-context information are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then multi-context information, including both local and global context, is added to these multi-scale features without extra computational burden. The local context is added through a depthwise separable convolution based approach, and the global context through simple global average pooling. Experimental results show that our method can run at about 110 fps on VGA (Video Graphics Array)-resolution images, while still maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets as compared with its state-of-the-art counterparts.
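A rough sketch (not the authors' code) of how local and global context might be appended to a fused multi-scale feature map: local context via a depthwise separable convolution, global context via global average pooling broadcast back over the spatial dimensions. Channel sizes and the additive combination are assumptions.

```python
# Illustrative context-enrichment module for a multi-scale feature map.
import torch
import torch.nn as nn

class ContextEnrichment(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # Depthwise separable convolution = depthwise conv + 1x1 pointwise conv.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_ctx = self.local(x)
        # Broadcast the pooled global descriptor back over the spatial dimensions.
        global_ctx = self.global_pool(x).expand_as(x)
        return x + local_ctx + global_ctx

if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)       # one fused multi-scale feature map
    print(ContextEnrichment(256)(feat).shape)
```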


2021 ◽  
Author(s):  
Oliver Stenzel ◽  
Robin Thor ◽  
Martin Hilchenbach

Orbital laser altimeters deliver a plethora of data that is used to map planetary surfaces [1] and to understand the interiors of solar system bodies [2]. Accuracy and precision of laser altimetry measurements depend on the knowledge of spacecraft position and pointing and on the instrument. Both are important for the retrieval of tidal parameters. In order to assess the quality of the altimeter retrievals, we are training and implementing an artificial neural network (ANN) to identify and exclude from analysis scans which yield erroneous data. The implementation is based on the PyTorch framework [3]. We present our results for the MESSENGER Mercury Laser Altimeter (MLA) data set [4], but also in view of the future analysis of data from the BepiColombo Laser Altimeter (BELA), which will arrive in orbit around Mercury in 2025 on board the Mercury Planetary Orbiter [5,6]. We further explore conventional methods of error identification and compare these with the machine learning results. Short periods of large residuals or large variation of residuals are identified and used to detect erroneous measurements. Furthermore, long-period systematics, such as those caused by slow variations in instrument pointing, can be modelled by including additional parameters.
[1] Zuber, Maria T., David E. Smith, Roger J. Phillips, Sean C. Solomon, Gregory A. Neumann, Steven A. Hauck, Stanton J. Peale, et al. 'Topography of the Northern Hemisphere of Mercury from MESSENGER Laser Altimetry'. Science 336, no. 6078 (13 April 2012): 217–20. https://doi.org/10.1126/science.1218805.
[2] Thor, Robin N., Reinald Kallenbach, Ulrich R. Christensen, Philipp Gläser, Alexander Stark, Gregor Steinbrügge, and Jürgen Oberst. 'Determination of the Lunar Body Tide from Global Laser Altimetry Data'. Journal of Geodesy 95, no. 1 (23 December 2020): 4. https://doi.org/10.1007/s00190-020-01455-8.
[3] Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. 'PyTorch: An Imperative Style, High-Performance Deep Learning Library'. Advances in Neural Information Processing Systems 32 (2019): 8026–37.
[4] Cavanaugh, John F., James C. Smith, Xiaoli Sun, Arlin E. Bartels, Luis Ramos-Izquierdo, Danny J. Krebs, Jan F. McGarry, et al. 'The Mercury Laser Altimeter Instrument for the MESSENGER Mission'. Space Science Reviews 131, no. 1 (1 August 2007): 451–79. https://doi.org/10.1007/s11214-007-9273-4.
[5] Thomas, N., T. Spohn, J.-P. Barriot, W. Benz, G. Beutler, U. Christensen, V. Dehant, et al. 'The BepiColombo Laser Altimeter (BELA): Concept and Baseline Design'. Planetary and Space Science 55, no. 10 (1 July 2007): 1398–1413. https://doi.org/10.1016/j.pss.2007.03.003.
[6] Benkhoff, Johannes, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. 'BepiColombo—Comprehensive Exploration of Mercury: Mission Overview and Science Goals'. Planetary and Space Science 58, no. 1 (1 January 2010): 2–20. https://doi.org/10.1016/j.pss.2009.09.020.
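A hedged sketch of the kind of ANN scan classifier described above, implemented with PyTorch [3]: the feature set (e.g. residual RMS, residual spread, pulse count), network size, and training loop are assumptions, not the authors' actual model.

```python
# Toy PyTorch classifier that flags altimeter scans as erroneous from summary features.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self, n_features: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(),
            nn.Linear(16, 1),          # logit: probability the scan is erroneous
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    model = ScanClassifier()
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder batch: 8 scans x 3 features, with binary good/bad labels.
    features = torch.randn(8, 3)
    labels = torch.randint(0, 2, (8, 1)).float()

    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
```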

