Multi Well Analysis Data Processing for Event Analysis and Mitigation

2021 ◽  
Author(s):  
Vallet Laurent ◽  
Gutarov Pavel ◽  
Chevallier Bertrand ◽  
Converset Julien ◽  
Paterson Graeme ◽  
...  

Abstract In the current economic environment, delivering wells on time and on budget is paramount. Well construction is a significant cost of any field development, and it is more important than ever to minimize these costs and to avoid unnecessary lost time and non-productive time. Invisible lost time and non-productive time can represent as much as 40% of the cost of well construction and can lead to more severe issues such as delaying first oil, losing the well, or environmental impact. There has been much work on developing systems to optimize well construction, but the industry still fails to routinely detect and avoid problematic events such as stuck pipe, kicks, losses, and washouts. Standardizing drilling practice can also improve efficiency: standardization has shown a 30% cost reduction through repetitive and systematic practices, automation is the key process to realize it, and machine learning introduced by new technologies makes that automation achievable. Drilling data analysis is key to understanding the reasons for poor performance and to detecting potential downhole events at an early stage. Done efficiently, it provides the user with tools to look at the well construction process as a whole instead of only the last few hours, as is done at the rig site. In order to analyze the drilling data, it is necessary to have access to reliable data in real time to compare with a data model that considers the context (BHA, fluids, well geometry). Well planning, including multi-well offset analysis of risks, drilling processes, and geology, enables a user to look at the full well construction process and define levels of automation. This paper applies machine learning to a post-drilling multi-well analysis of a deepwater field development known for its drilling challenges. Minimizing human input through automation allowed us to compare offset wells and to define the root cause of non-productive time.
In our case study, an increase of pressure while drilling should have led to immediate mitigation measures to avoid a wiper trip. This paper presents techniques used to systematize surface data analysis and a workflow to identify a near pack-off at an early stage, which was spotted automatically. Applying this process during operations could have achieved a 10% time reduction in the 12 ¼″ section.
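The abstract does not describe the detection logic itself, but the core idea of flagging a pressure rise against recent behavior can be sketched as a simple rolling-baseline check. The window size and 5% threshold below are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch: flag a sustained standpipe-pressure rise relative to a
# trailing-window baseline, as an early pack-off indicator.
from statistics import mean

def detect_pressure_rise(pressure, window=10, rel_threshold=0.05):
    """Return indices where pressure exceeds the mean of the preceding
    `window` samples by more than rel_threshold (e.g. 5%)."""
    alerts = []
    for i in range(window, len(pressure)):
        baseline = mean(pressure[i - window:i])
        if pressure[i] > baseline * (1 + rel_threshold):
            alerts.append(i)
    return alerts

readings = [2000.0] * 15 + [2005.0, 2030.0, 2120.0, 2150.0, 2160.0]
print(detect_pressure_rise(readings))   # indices of the anomalous samples
```

In a real-time system the same comparison would run against a contextual model (BHA, fluids, well geometry) rather than a fixed relative threshold.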

2021 ◽  
Author(s):  
Silvia Mora ◽  
Damian Martinez

Abstract Drilling is probably the most critical, complex, and costly operation in the oil and gas industry, and unfortunately, errors made during the related activities are very expensive. Inefficient drilling activities, such as connection durations outside of optimal times, can therefore have a considerable financial impact, so there is always a need to improve drilling efficiency. For this reason, measuring the behavior and duration of drilling activities represents a significant opportunity to maximize the cost saving per well or campaign. Reducing the cost impact and maximizing drilling efficiency depend on how an operating company's drilling plan calculates the perfect well time from the technical limit, non-productive time (NPT), and invisible lost time (ILT). Different approaches to measuring the invisible lost time that can be present in the in-slips activity of a drilling operation are compared. Results show the differences between multiple techniques applied in real environments, with data coming from a cloud platform. The methodologies implemented are based on the following scenarios: the first uses a combination of a custom technical limit based on technical experience, a historical data limit using standard measures (mean, quartiles, standard deviation, etc.), and a depth-range (phase) differentiation into initial, intermediate, and final hole sizes. A complexity comparison uses the rig stand and phase footage variables for baseline (count and duration) definition per phase, with exclusion of non-productive-time activities and data-replacement techniques, combined with detection of out-of-standard in-slips times (motor assemblies, bit replacement, bottom hole assembly (BHA), etc.) using standard and machine learning mechanisms. A final methodology implements an in-slips ILT technical-limit definition using machine learning.
The results, obtained from the different methods using the same data set (set of wells), have been evaluated according to the total invisible lost time calculated per phase, the percentage of activities evaluated with invisible lost time per phase, and the variation of ILT considering the activities defining the technical limit. Finally, any operator can evaluate the potential implementation of these methodologies according to their specific requirements. This analysis provides operating companies with a guideline to multiple techniques for calculating ILT, some using innovative procedures based on machine learning models.
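As a rough illustration of the historical-data-limit scenario, ILT for a set of in-slips durations can be estimated as time spent above a quantile-based technical limit. The quartile choice and the sample durations below are invented for illustration and are not the paper's method.

```python
# Illustrative sketch (not the paper's exact methodology): estimate invisible
# lost time for in-slips connection durations against a quantile baseline.
def invisible_lost_time(durations_min, quantile=0.25):
    """Technical limit = the chosen quantile of observed durations;
    ILT = total time spent above that limit, in the same units."""
    ordered = sorted(durations_min)
    idx = int(quantile * (len(ordered) - 1))
    limit = ordered[idx]
    return sum(d - limit for d in durations_min if d > limit)

in_slips = [2.0, 2.5, 3.0, 4.0, 6.5]   # minutes per connection, one phase
print(invisible_lost_time(in_slips))
```

In the paper's framing this calculation would be repeated per phase (initial, intermediate, final hole sizes) after excluding NPT activities.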


Author(s):  
Anguraj K. et al.

Agriculture plays a significant role in the economic development of our nation. Crop production has been greatly affected by changes in weather patterns. Emerging technologies can be used to improve crop productivity by converting traditional farming to precision farming. The new technologies used include data analysis and the Internet of Things (IoT). A major issue yet to be resolved is cultivating the precise crop at the precise time. This can be done with the help of machine learning algorithms, which are found to be an effective method for predicting the suitable crop. Soil parameters such as soil moisture, temperature, humidity, and pH are collected from the sensors using IoT and passed to a graphical user interface (GUI). The GUI takes these inputs and suggests suitable crops. The system developed using IoT and ML greatly helps farmers to make valuable decisions.
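The abstract does not name the algorithm used, but the sensor-to-crop mapping it describes can be sketched as a nearest-neighbour lookup over soil profiles. The reference profiles and crop names below are invented for illustration only.

```python
# Hypothetical sketch of the crop-suggestion step: a nearest-neighbour lookup
# maps sensor readings (moisture %, temperature degC, humidity %, pH) to a crop.
REFERENCE = {
    "rice":   (80.0, 27.0, 85.0, 6.0),   # moisture, temp, humidity, pH
    "millet": (30.0, 32.0, 40.0, 7.5),
    "wheat":  (55.0, 22.0, 60.0, 6.5),
}

def suggest_crop(reading):
    """Return the crop whose reference profile is closest (squared
    Euclidean distance) to the live sensor reading."""
    def dist(profile):
        return sum((a - b) ** 2 for a, b in zip(reading, profile))
    return min(REFERENCE, key=lambda crop: dist(REFERENCE[crop]))

print(suggest_crop((78.0, 26.0, 82.0, 6.1)))
```

A trained classifier on historical yield data would replace the hand-written reference table in practice; the GUI described in the abstract would simply call a function like `suggest_crop` with the latest IoT readings.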


2022 ◽  
Vol 2022 ◽  
pp. 1-7
Author(s):  
Baobao Dong ◽  
Xiangming Wang ◽  
Qi Cao

With the development of wireless networks, communication technology, cloud platforms, and the Internet of Things (IoT), new technologies are gradually being applied to the smart healthcare industry. The COVID-19 outbreak has brought more attention to the development of this emerging industry. However, its development is restricted by factors such as long construction cycles, large early-stage investment, and lagging returns, and listed companies also face financing difficulties. In this study, machine learning algorithms are used to predict performance; they can not only deal with a large amount of data and feature variables but also analyse different types of variables and predict their classes, increasing the stability and accuracy of the model and helping to solve the problem of poor performance prediction in the past. After analysing sample data from 53 listed companies in the smart healthcare industry, we argue that the conclusions of this study can not only provide a reference for listed companies in the smart healthcare industry to formulate their own strategies but also provide shareholders with strategies to avoid risks and help the development of this emerging industry.


Author(s):  
Mengyuan Li ◽  
Zhilan Zhang ◽  
Shanmei Jiang ◽  
Qian Liu ◽  
Canping Chen ◽  
...  

Abstract Background: Although COVID-19 has been well controlled in China, it is rapidly spreading outside the country and may have catastrophic results globally without implementation of necessary mitigation measures. Because the COVID-19 outbreak has made comprehensive and profound impacts on the world, an accurate prediction of its epidemic trend is significant. Although many studies have predicted the COVID-19 epidemic trend, most have used early-stage data and focused on Chinese cases. Methods: We first built models to predict daily numbers of cumulative confirmed cases (CCCs), new cases (NCs), and death cases (DCs) of COVID-19 in China based on data from January 20, 2020, to March 1, 2020. Based on these models, we built models to predict the epidemic trend across the world (outside China). We also built models to predict the epidemic trend in Italy, Spain, Germany, France, the UK, and the USA, where COVID-19 is rapidly spreading. Results: The COVID-19 outbreak will have peaked on February 22, 2020, in China and will peak on May 22, 2020, across the world. It will be basically under control in early April 2020 in China and late August 2020 across the world. The total number of COVID-19 cases will reach around 89,000 in China and 6,126,000 across the world during the epidemic. Around 4,000 and 290,000 people will die of COVID-19 in China and across the world, respectively. The COVID-19 outbreak will have peaked recently in Italy and will peak in Spain, Germany, France, the UK, and the USA within two weeks. Conclusion: The COVID-19 outbreak is controllable in the foreseeable future if comprehensive and stringent control measures are taken.
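The abstract does not state which model family was fitted, but cumulative-case trends of this shape are commonly described by a logistic curve C(t) = K / (1 + exp(-r (t - t0))), where daily new cases peak near t = t0. The parameters below are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch of a logistic epidemic-trend model (an assumption; the paper's
# actual model is not specified in the abstract).
import math

def logistic_cases(t, K, r, t0):
    """Cumulative cases at day t: capacity K, growth rate r, midpoint t0."""
    return K / (1 + math.exp(-r * (t - t0)))

K, r, t0 = 89000, 0.22, 33          # illustrative parameters, not fitted
daily = [logistic_cases(t + 1, K, r, t0) - logistic_cases(t, K, r, t0)
         for t in range(100)]
peak_day = daily.index(max(daily))  # daily new cases peak near t0
print(peak_day)
```

Fitting K, r, and t0 to reported case counts (e.g. by least squares) is what turns this curve into a forecast of the peak date and the final case total.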


2018 ◽  
Vol 1 (1) ◽  
pp. 236-247
Author(s):  
Divya Srivastava ◽  
Rajitha B. ◽  
Suneeta Agarwal

Diseases in leaves can cause a significant reduction in both the quality and quantity of agricultural production. If early and accurate detection of diseases in leaves can be automated, then a proper remedy can be applied in time. A simple and computationally efficient approach is presented in this paper for disease detection on leaves. Detecting the disease alone is not beneficial without knowing its stage, so the paper also determines the stage of the disease by quantifying the affected area of the leaves using digital image processing and machine learning. Though a variety of leaf diseases exist, bacterial and fungal spots (Early Scorch, Late Scorch, and Leaf Spot) are the most prominent diseases found on leaves. With this in mind, the paper deals with the detection of Bacterial Blight and Fungal Spot at both an early stage (Early Scorch) and a late stage (Late Scorch) on a variety of leaves. The proposed approach is divided into two phases: in the first phase, it identifies one or more diseases existing on the leaves; in the second phase, the amount of area affected by the diseases is calculated. The experimental results obtained showed 97% accuracy using the proposed approach.
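The second phase (quantifying the affected area) can be sketched as a pixel count over a segmented leaf image. The fixed intensity threshold and the toy image below are assumptions for illustration; the paper's actual segmentation is not detailed in the abstract.

```python
# Minimal sketch of the area-quantification phase, assuming the leaf has been
# segmented and diseased spots appear darker than a fixed intensity threshold.
def affected_percentage(gray_image, threshold=100):
    """gray_image: 2-D list of 0-255 intensities covering leaf pixels only.
    Returns the share of pixels classified as diseased, in percent."""
    total = diseased = 0
    for row in gray_image:
        for pixel in row:
            total += 1
            if pixel < threshold:     # dark spot -> lesion
                diseased += 1
    return 100.0 * diseased / total

leaf = [
    [180, 175, 60, 55],
    [185, 190, 58, 170],
    [178, 182, 188, 176],
]
print(affected_percentage(leaf))
```

Mapping the resulting percentage onto stage labels (early vs. late scorch) would then be a simple banding of this value or a learned classifier, as the paper's two-phase design suggests.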


2020 ◽  
Vol 13 (5) ◽  
pp. 1020-1030
Author(s):  
Pradeep S. ◽  
Jagadish S. Kallimani

Background: With the advent of data analysis and machine learning, there is a growing impetus to analyze and generate models on historic data. The data comes in numerous forms and shapes, with an abundance of challenges. The most readily analyzed form of data is numerical data; with the plethora of algorithms and tools available, such data is quite manageable. Another form of data is categorical in nature, which is subdivided into ordinal (ordered) and nominal (unordered). This data can be broadly classified as sequential and non-sequential; sequential data is easier to preprocess using existing algorithms. Objective: The challenge of applying machine learning algorithms to categorical data of a non-sequential nature is dealt with in this paper. Methods: Upon implementing several data analysis algorithms on such data, we end up with a biased result, which makes it impossible to generate a reliable predictive model. In this paper, we address this problem by walking through a handful of techniques which, during our research, helped us in dealing with large categorical data of a non-sequential nature. In subsequent sections, we discuss the possible implementable solutions and the shortfalls of these techniques. Results: The methods are applied to sample datasets available in the public domain, and the results with respect to classification accuracy are satisfactory. Conclusion: The best pre-processing technique we observed in our research is one-hot encoding, which breaks the categorical features down into binary columns that can be fed into an algorithm to predict the outcome. The example that we took is not abstract; it is a real-time production services dataset with many complex variations of categorical features. Our future work includes creating a robust model on such data and deploying it into industry-standard applications.
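The one-hot encoding named in the conclusion can be sketched in a few lines: each distinct categorical value becomes its own binary column, giving a learning algorithm purely numeric input. The service names below are invented placeholders.

```python
# Sketch of one-hot encoding for nominal (unordered) categorical features.
def one_hot(values):
    """Map a list of categorical values to binary indicator vectors,
    with columns ordered by sorted category name."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

services = ["db", "web", "cache", "web"]
print(one_hot(services))   # column order: cache, db, web
```

Libraries such as scikit-learn (`OneHotEncoder`) and pandas (`get_dummies`) implement the same idea with handling for unseen categories and sparse output.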


2021 ◽  
Vol 200 ◽  
pp. 108377
Author(s):  
Bing Kong ◽  
Zhuoheng Chen ◽  
Shengnan Chen ◽  
Tianjie Qin
