Drilling Performance Improvement Through use of Artificial Intelligence in Bit and Bottom Hole Assembly Selection in Gulf of Thailand

2021 ◽  
Author(s):  
Nichnita Tortrakul ◽  
Chatwit Pochan ◽  
Steve Southland ◽  
Pimjai Mala ◽  
Tawpath Pichaichanlert ◽  
...  

Abstract This paper describes a method of transforming a legacy manual bit/BHA planning process into a digital solution that improves the efficiency and consistency of drilling assembly selection. The solution improves overall capital stewardship through effective, semi-automated use of data to deliver high-quality, consistent decisions across drilling applications and to drive drilling performance. Data science and machine learning are applied to streamline data preparation and present the user with a statistically sound drilling assembly recommendation for the input drilling environment. A database of more than 6,000 wells is used to explore alternatives and rank potential solutions using performance and directional-compatibility characteristics unique to the Gulf of Thailand. The goal of the digital project is two-fold. The first objective is to streamline all related data and decision processes in the office to improve work efficiency and information accessibility. The second is to improve field drilling performance by installing a self-learning advisory tool. Multiple sub-processes must work in parallel: population of the database and its quality checks must be automated to handle hourly and daily data updates. One system was created to auto-load drilling data from the rig site. A second system, built on data science and machine learning, identifies similar wells, ranks their performance and directional compatibility relative to a future well of interest, and offers a statistically relevant recommendation. The benefit of such a system is a more efficient workflow with improved field drilling results, while effectively capturing Chevron Thailand methods for future drilling engineers to use. Adopting agile practices during the development phase was one of the keys to the project's success, and digital transformation technology was a key enabler for handling big data, data science, and the data foundation.
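The similar-well identification and ranking step the abstract describes could, in heavily simplified form, look like the sketch below. All field names, weights, and numbers are hypothetical, not Chevron's actual system: offset wells are scored by a weighted distance to the planned well's environment, with rate of penetration (ROP) breaking ties in favour of performance.

```python
# Illustrative sketch (not the actual Chevron system): rank offset wells by
# similarity to a planned well, then by drilling performance (ROP).
# All field names, weights, and numbers here are hypothetical.

def rank_offset_wells(planned, offsets, weights):
    """Score each offset well by weighted distance to the planned well's
    environment, then sort most-similar (and fastest-drilling) first."""
    def distance(well):
        return sum(w * abs(well[k] - planned[k]) for k, w in weights.items())
    # Lower distance = more similar; higher ROP breaks ties toward performance.
    return sorted(offsets, key=lambda well: (distance(well), -well["rop"]))

planned = {"depth_m": 2800, "inclination_deg": 45}
offsets = [
    {"name": "A-01", "depth_m": 2750, "inclination_deg": 44, "rop": 80},
    {"name": "B-07", "depth_m": 3400, "inclination_deg": 70, "rop": 95},
    {"name": "C-12", "depth_m": 2790, "inclination_deg": 46, "rop": 60},
]
weights = {"depth_m": 0.01, "inclination_deg": 1.0}

ranking = rank_offset_wells(planned, offsets, weights)
print([w["name"] for w in ranking])
```

In a production system the distance metric, feature weights, and performance score would be learned from the well database rather than hand-set as here.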

Aerospace ◽  
2020 ◽  
Vol 7 (6) ◽  
pp. 73 ◽  
Author(s):  
HyunKi Lee ◽  
Sasha Madar ◽  
Santusht Sairam ◽  
Tejas G. Puranik ◽  
Alexia P. Payan ◽  
...  

In recent years, there has been a rapid growth in the application of data science techniques that leverage aviation data collected from commercial airline operations to improve safety. This paper presents the application of machine learning to improve the understanding of risk factors during flight and their causal chains. With increasing complexity and volume of operations, rapid accumulation and analysis of this safety-related data has the potential to maintain and even lower the low global accident rates in aviation. This paper presents the development of an analytical methodology called Safety Analysis of Flight Events (SAFE) that synthesizes data cleaning, correlation analysis, classification-based supervised learning, and a data visualization schema to streamline the isolation of critical parameters and the elimination of tangential factors for safety events in aviation. The SAFE methodology outlines a robust and repeatable framework that is applicable across heterogeneous data sets containing multiple aircraft, airports of operation, and phases of flight. It is demonstrated on Flight Operations Quality Assurance (FOQA) data from a commercial airline through use cases related to three safety events, namely the Tire Speed Event, Roll Event, and Landing Distance Event. The application of the SAFE methodology yields a ranked list of critical parameters in line with subject-matter expert conceptions of these events for all three use cases. The work concludes by raising important issues about the compatibility of machine learning with human conceptualization of incidents and their precursors, and provides initial guidance for their reconciliation.
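One building block of such a pipeline, ranking candidate flight parameters by the strength of their association with a safety-event label, can be sketched as follows. The parameter names and values are invented for illustration, and SAFE itself combines this kind of step with data cleaning, supervised classification, and visualization.

```python
# Illustrative sketch: rank flight parameters by absolute Pearson correlation
# with a binary safety-event label. Parameter names and values are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each row: parameter values at touchdown; label 1 = event occurred.
flights = {
    "groundspeed_kt": [135, 150, 142, 160, 138],
    "headwind_kt":    [12, 2, 9, 1, 11],
    "flap_setting":   [30, 30, 25, 30, 25],
}
labels = [0, 1, 0, 1, 0]

# Strongest association first; tangential parameters fall to the bottom.
ranked = sorted(flights, key=lambda p: -abs(pearson(flights[p], labels)))
print(ranked)
```

In SAFE, correlation screening like this is complemented by classification-based supervised learning, which can also capture nonlinear and interacting precursors that a simple linear correlation would miss.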


2021 ◽  
Author(s):  
Luc Thomès ◽  
Rebekka Burkholz ◽  
Daniel Bojar

Abstract As biological sequences, glycans occur in every domain of life and comprise monosaccharides that are chained together to form oligo- or polysaccharides. While glycans are crucial for most biological processes, existing analysis modalities make it difficult for researchers with limited computational background to include information from these diverse and nonlinear sequences in standard workflows. Here, we present glycowork, an open-source Python package designed for the processing and analysis of glycan data by end users, with a strong focus on glycan-related data science and machine learning. Glycowork includes numerous functions to, for instance, automatically annotate glycan motifs and analyze their distributions via heatmaps and statistical enrichment. We also provide visualization methods, routines to interact with stored databases, trained machine learning models, and learned glycan representations. We envision that glycowork can extract further insights from any glycan dataset and demonstrate this with several workflows that analyze glycan motifs in various biological contexts. Glycowork can be freely accessed at https://github.com/BojarLab/glycowork/.
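The core idea behind motif annotation can be illustrated in plain Python on a linear glycan written in IUPAC-condensed notation. This is a simplified illustration only, not glycowork's actual implementation; the package's own routines are far more general (handling branching, motif libraries, and statistical enrichment).

```python
# Illustrative only: counting disaccharide motifs in a linear IUPAC-condensed
# glycan string. Glycowork's own motif-annotation routines are more general.
import re
from collections import Counter

def disaccharide_motifs(glycan):
    """Split a linear IUPAC-condensed glycan into overlapping
    monosaccharide(linkage)monosaccharide motifs."""
    # Tokenize into alternating monosaccharides and linkages,
    # e.g. Gal, b1-4, GlcNAc, b1-3, ...
    tokens = [t for t in re.split(r"[()]", glycan) if t]
    motifs = []
    for i in range(0, len(tokens) - 2, 2):
        motifs.append(f"{tokens[i]}({tokens[i+1]}){tokens[i+2]}")
    return Counter(motifs)

# Lacto-N-neotetraose as a small worked example:
lnnt = "Gal(b1-4)GlcNAc(b1-3)Gal(b1-4)Glc"
print(disaccharide_motifs(lnnt))
```

Motif counts like these are the kind of per-glycan features that can then feed heatmaps, enrichment tests, or machine learning models downstream.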


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Marceli Lukaszewski ◽  
Rafal Lukaszewski ◽  
Kinga Kosiorowska ◽  
Marek Jasinski

Abstract Background Recent scientific reports have brought to light a new concept of goal-directed perfusion (GDP) that aims to recreate physiological conditions in which the risk of end-organ malperfusion is minimized. The aim of our study was to analyse patients’ interim physiology while on cardiopulmonary bypass, based on haemodynamic and tissue oxygen delivery measurements. We also aimed to create a universal formula that may help in further implementation of the GDP concept. Methods We retrospectively analysed patients operated on at the Wroclaw University Hospital between June 2017 and December 2018. Since our observations provided an extensive amount of data, including the patients' demographics, surgery details and the perfusion-related data, a Data Science methodology was applied. Results A total of 272 cardiac surgery patients (mean age 62.5 ± 12.4 years, 74% male) were included in the study. To study the relationship between haemodynamic and tissue oxygen parameters, the data were evaluated at three values of indexed oxygen delivery (DO2i): 280 ml/min/m2, 330 ml/min/m2 and 380 ml/min/m2. For each fixed DO2i, the resulting isolines showed cardiac index (CI) as a descending function of haemoglobin (Hb) concentration. Conclusions Modern calculation tools make it possible to create a common data platform from a very large database. Using this methodology, we created models of the haemodynamic components that describe tissue oxygen delivery. The resulting patterns may allow the flow to be adapted to each patient's unique, time-varying physiology, and may contribute to wider and safer implementation of a perfusion strategy tailored to every patient's individual needs.
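The hyperbolic CI-Hb relationship behind those DO2i isolines follows from the standard oxygen-delivery formula, DO2i = CI × 1.34 × Hb × SaO2 × 10 (CI in L/min/m2, Hb in g/dL, dissolved oxygen neglected). The snippet below uses this textbook relation, not the authors' fitted model.

```python
# Textbook oxygen-delivery relation (not the authors' fitted model):
# DO2i = CI * 1.34 * Hb * SaO2 * 10, with CI in L/min/m2 and Hb in g/dL.

def ci_for_target_do2i(do2i, hb, sao2=1.0):
    """Cardiac index (L/min/m2) required to hold a target DO2i (ml/min/m2)
    at a given haemoglobin concentration (g/dL) and arterial saturation."""
    return do2i / (1.34 * hb * sao2 * 10)

# CI falls as Hb rises along each DO2i isoline (280, 330, 380 ml/min/m2):
for do2i in (280, 330, 380):
    line = [round(ci_for_target_do2i(do2i, hb), 2) for hb in (7, 9, 11)]
    print(do2i, line)
```

This makes the clinical trade-off explicit: on bypass, a lower haemoglobin concentration must be compensated by a proportionally higher pump flow to keep oxygen delivery at the chosen GDP target.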


2020 ◽  
Vol 11 (35) ◽  
pp. 9665-9674
Author(s):  
Steven M. Maley ◽  
Doo-Hyun Kwon ◽  
Nick Rollins ◽  
Johnathan C. Stanley ◽  
Orson L. Sydora ◽  
...  

Using data science tools to reveal non-trivial chemical features for catalyst design is an important goal in catalysis science.


2021 ◽  
Vol 5 (2) ◽  
pp. 369-378
Author(s):  
Eka Pandu Cynthia ◽  
M. Afif Rizky A. ◽  
Alwis Nazir ◽  
Fadhilah Syafria

This paper explains the use of the Random Forest Algorithm (RFA) to investigate cases of Acute Coronary Syndrome (ACS). The objective of this study is to evaluate the use of data science techniques and machine learning algorithms in creating a model that can classify whether or not a case of acute coronary syndrome occurs. The research method follows the IBM Foundational Methodology for Data Science: i) inventorying a dataset about ACS; ii) preprocessing the data in four sub-processes, i.e. requirements, collection, understanding, and preparation; iii) configuring the RFA, i.e. choosing the number "n" of trees that will form the forest and building those trees; and iv) evaluating the model and analysing the results, implemented in the Python programming language. Experiments were conducted using a random forest machine-learning algorithm with an n-estimator value of 100, a maximum tree depth (max depth) of 4, and learning scenarios of 70:30, 80:20, and 90:10 on data from 444 cases of acute coronary syndrome. The results show that the 70:30 scenario yields the best model, with an accuracy of 83.45%, a precision of 85%, and a recall of 92.4%. The experimental results were evaluated with these statistical metrics (accuracy, precision, and recall) in each learning scenario using 10-fold cross-validation.
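The reported setup maps directly onto scikit-learn. The sketch below reproduces the stated hyperparameters (100 trees, maximum depth 4, a 70:30 split, accuracy/precision/recall) on synthetic data, since the 444-case ACS dataset itself is not public; the resulting metric values are therefore not comparable to the paper's.

```python
# Hedged reconstruction of the paper's setup with scikit-learn on synthetic
# data (the 444-case ACS dataset is not public): 100 trees, max depth 4,
# a 70:30 train/test split, and accuracy/precision/recall as metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in for the 444 ACS cases: a synthetic binary classification problem.
X, y = make_classification(n_samples=444, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

model = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy={accuracy_score(y_te, pred):.3f}",
      f"precision={precision_score(y_te, pred):.3f}",
      f"recall={recall_score(y_te, pred):.3f}")
```

Swapping `test_size` to 0.20 or 0.10 reproduces the 80:20 and 90:10 scenarios, and `sklearn.model_selection.cross_val_score` with `cv=10` corresponds to the paper's 10-fold cross-validation.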


i-com ◽  
2020 ◽  
Vol 19 (3) ◽  
pp. 215-226
Author(s):  
Maria Rauschenberger ◽  
Ricardo Baeza-Yates

Abstract When discussing interpretable machine learning results, researchers need to compare them and check for reliability, especially for health-related data. The reason is the negative impact of wrong results on a person, such as a wrong prediction of cancer, an incorrect assessment of the COVID-19 pandemic situation, or a missed early screening of dyslexia. Often only small datasets exist for these complex interdisciplinary research projects. Hence, it is essential that researchers in this field understand different methodologies and mindsets, such as the Design Science Methodology, Human-Centered Design, or Data Science approaches, to ensure interpretable and reliable results. Therefore, we present various recommendations and design considerations for experiments that help to avoid over-fitting and biased interpretation of results when working with small, imbalanced health-related data. We also present two very different use cases: early screening of dyslexia and event prediction in multiple sclerosis.
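One concrete safeguard in this spirit, when data are small and imbalanced, is stratified cross-validation, which guarantees that every fold preserves the minority-class proportion. The dataset below is synthetic and purely illustrative.

```python
# Stratified cross-validation on small, imbalanced data: every test fold
# keeps the minority-class proportion of the full dataset. Synthetic example.
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 40 samples, only 8 positives: a plain random split could leave a fold
# with no positive cases at all, making metrics like recall meaningless.
y = np.array([1] * 8 + [0] * 32)
X = np.arange(40).reshape(-1, 1)

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each test fold of 10 samples keeps exactly 2 positives (the 20% rate).
    print(fold, int(y[test_idx].sum()), len(test_idx))
```

On such small samples, reporting per-fold variability (not just a single mean score) and preferring class-aware metrics over raw accuracy are further simple guards against over-optimistic interpretation.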

