Looking Ahead of the Bit Using Surface Drilling and Petrophysical Data: Machine-Learning-Based Real-Time Geosteering in Volve Field

SPE Journal ◽  
2020 ◽  
Vol 25 (02) ◽  
pp. 990-1006 ◽  
Author(s):  
Ishank Gupta ◽  
Ngoc Tran ◽  
Deepak Devegowda ◽  
Vikram Jayaram ◽  
Chandra Rai ◽  
...  

Summary Petroleum reservoirs are often associated with multiple target zones or a single zone adjacent to nonproductive intervals. Real-time geosteering therefore becomes important to remain in zone or to dynamically steer toward a target. This requires knowledge of the petrophysical/rock mechanical properties of the rock surrounding the bit. Although logging while drilling can provide this information, a cost-effective and almost-real-time solution is lacking. In general, there is a depth lag, and therefore, a time delay, between what the logging-while-drilling sub relays to the surface and the bit performance. This study focuses on relating drill-bit- and drillstring-performance data in a machine-learning (ML) workflow to predict the lithology at the bit while drilling. The method we are proposing offers several advantages in terms of cost and time savings for real-time geosteering applications, where going out of zone requires costly intervention. In this study, we have used a public data set from Volve Field on the Norwegian continental shelf. Within our proposed workflow, as a first step, logs sensitive to lithology [such as density, gamma ray (GR), and sonic] are grouped into three electrofacies. We also had access to core data, which helped us interpret the electrofacies in terms of mineralogy. The three electrofacies corresponded to quartz-rich (sandstone/siltstone), clay-rich (shale), and carbonate-rich (limestone) lithologies. The next step is to predict the electrofacies using various measurement-while-drilling (MWD) variables, such as rate of penetration (ROP), weight on bit (WOB), and several others that are monitored in real time. Supervised classification algorithms were used to relate real-time surface measurements to lithology. The algorithms were able to predict lithology in test wells with more than 80% accuracy. These results, although encouraging, constitute a small step toward drilling-automation/advisory systems. 
The development of such systems can prevent costly out-of-zone drilling and minimize rig time and equipment use, thereby potentially reducing capital expenditures. This study was performed specifically in Volve Field in the North Sea using petrophysical and surface drilling data from vertical wells. However, the workflow has the potential to be extended to other formations, other fields, and other well types.
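The classification step of the workflow above can be illustrated with a minimal sketch: map real-time MWD measurements such as ROP and WOB to one of the three electrofacies with a nearest-neighbour vote. The feature values below are invented for illustration, not Volve field data, and the paper's actual supervised classifiers are not specified here.

```python
from collections import Counter
import math

# Toy MWD samples: (ROP, WOB) -> electrofacies label.
# Values are illustrative only, not Volve field data.
train = [
    ((30.0, 8.0), "quartz-rich"),
    ((28.0, 9.0), "quartz-rich"),
    ((12.0, 14.0), "carbonate-rich"),
    ((10.0, 15.0), "carbonate-rich"),
    ((45.0, 5.0), "clay-rich"),
    ((48.0, 4.5), "clay-rich"),
]

def predict_facies(sample, k=3):
    """Nearest-neighbour vote over the labelled MWD samples."""
    dists = sorted((math.dist(sample, x), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(predict_facies((29.0, 8.5)))  # quartz-rich
```

In practice the labels would come from the electrofacies clustering of density, GR, and sonic logs, and the classifier would be trained on many more MWD variables.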

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose The current popular image-processing technologies based on convolutional neural networks involve heavy computation and high storage cost and achieve low accuracy on tiny-defect detection, which conflicts with the high real-time performance and accuracy demanded by industrial applications under limited computing and storage resources. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve the above problems. Design/methodology/approach On the one hand, this study performs multi-dimensional compression on the feature-extraction network of YOLOv4 to simplify the model, and improves the feature-extraction ability of the model through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves detection performance for tiny defects. Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel-ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method can greatly improve recognition efficiency and accuracy while reducing the size and computational cost of the model. Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface-defect detection, which is well suited to industrial scenarios with limited storage and computing resources and meets the requirements of high real-time performance and precision.
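The knowledge-distillation step mentioned above generally trains the compressed student network against the teacher's temperature-softened outputs. A minimal sketch of the standard soft-target loss follows; the temperature value and logits are illustrative assumptions, not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between teacher and student soft targets.

    Scaled by T^2, as is conventional, so the gradient keeps a
    magnitude comparable to the hard-label loss term.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

loss = distillation_loss([2.0, 0.5, 0.1], [2.2, 0.4, 0.2])
```

During training this term is typically mixed with the ordinary detection loss on ground-truth labels.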


2021 ◽  
Author(s):  
Temirlan Zhekenov ◽  
Artem Nechaev ◽  
Kamilla Chettykbayeva ◽  
Alexey Zinovyev ◽  
German Sardarov ◽  
...  

SUMMARY Researchers base their analyses on basic drilling parameters obtained during mud logging and demonstrate impressive results. However, because of the data-quality limitations often present during drilling, those solutions tend to lose their stability and high predictivity. In this work, the concept of hybrid modeling is introduced, which integrates analytical correlations with machine-learning algorithms to obtain stable solutions that remain consistent from one data set to another.
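One common form of such hybrid modeling is to let an analytical correlation supply the physical baseline and fit a machine-learning model only to its residuals. The sketch below uses a simple linear residual model and an invented correlation purely for illustration; the paper's actual correlations and algorithms may differ.

```python
def fit_residual_correction(x_values, measured, analytical):
    """Fit a linear model a*x + b to the residuals between measured
    data and an analytical correlation (ordinary least squares)."""
    residuals = [m - analytical(x) for x, m in zip(x_values, measured)]
    n = len(x_values)
    sx = sum(x_values)
    sy = sum(residuals)
    sxx = sum(x * x for x in x_values)
    sxy = sum(x * r for x, r in zip(x_values, residuals))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def hybrid_predict(x, analytical, a, b):
    """Analytical estimate plus the learned residual correction."""
    return analytical(x) + a * x + b

# Illustrative example: the "analytical correlation" is assumed.
analytical = lambda x: 2.0 * x
xs = [1.0, 2.0, 3.0, 4.0]
measured = [2.5, 4.9, 7.3, 9.7]
a, b = fit_residual_correction(xs, measured, analytical)
```

Because the baseline carries the physics, the learned correction stays small and tends to transfer more stably across data sets than a purely data-driven model.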


2021 ◽  
Vol 11 (6) ◽  
pp. 1592-1598
Author(s):  
Xufei Liu

The early detection of cardiovascular disease from the electrocardiogram (ECG) is very important for the timely treatment of cardiovascular patients and increases their survival rate. The ECG is a visual representation of changes in cardiac bioelectricity and is the basis for assessing heart health. With the rise of edge machine learning and Internet of Things (IoT) technologies, small machine-learning models have received attention. This study proposes an automatic ECG classification method based on IoT technology and an LSTM network to achieve early monitoring and early prevention of cardiovascular disease. Specifically, this paper first proposes a single-layer bidirectional LSTM network structure that makes full use of the timing-dependent features of the sampling points before and after each point to extract features automatically; the resulting network is lightweight, with low computational complexity. To verify the effectiveness of the proposed classification model, it is compared against relevant algorithms on the public MIT-BIH data set. Secondly, the model is embedded in a wearable device to classify the collected ECG automatically. Finally, when an abnormality is detected, the user is alerted by an alarm. The experimental results show that the proposed model has a simple structure and a high classification and recognition rate, and can meet the needs of wearable devices for monitoring patients' ECGs.
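The building block of such a network is the LSTM cell, which carries a cell state across the ECG samples. Below is a minimal scalar LSTM step in plain Python with toy weights, just to show the gate mechanics; a bidirectional layer, as in the paper, runs such cells over the samples in both directions and concatenates the outputs. The weights and inputs here are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell (single feature, single unit).

    w maps each gate name to (w_x, w_h, bias).
    """
    def gate(name, act):
        wx, wh, b = w[name]
        return act(wx * x + wh * h_prev + b)

    i = gate("input", sigmoid)    # how much new information enters
    f = gate("forget", sigmoid)   # how much old cell state is kept
    o = gate("output", sigmoid)   # how much cell state is exposed
    g = gate("cell", math.tanh)   # candidate cell update
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Toy weights, for illustration only.
w = {name: (0.5, 0.1, 0.0) for name in ("input", "forget", "output", "cell")}
h, c = 0.0, 0.0
for sample in [0.1, 0.8, -0.3]:  # a few ECG samples
    h, c = lstm_step(sample, h, c, w)
```

A real implementation would vectorize this over hidden units and feed the final hidden states to a small classification head.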


2019 ◽  
Vol 59 (1) ◽  
pp. 319 ◽  
Author(s):  
Ruizhi Zhong ◽  
Raymond Johnson Jr ◽  
Zhongwei Chen ◽  
Nathaniel Chand

Currently, coal is identified using coring data or log interpretation. Coring is the most dependable methodology, but it is costly, and core characterisation is expensive and time consuming. Logging methods are convenient, reliable, and reproducible, but can be subject to statistical and shouldering effects and often face operational difficulties in deviated or horizontal wells. Drilling data, which are routinely available, can potentially be used to identify coal sections in a machine-learning environment when conventional wireline logs are not available. To achieve this, a four-layer artificial neural network (ANN) was used to identify coals in a well in the Walloon Sub-Group, Surat Basin. The ANN model used drilling data and some logging-while-drilling (LWD) data. The inputs to the lithological model from high-frequency drilling data include weight on bit, rotary speed, torque, and rate of penetration; inputs from LWD data include gamma ray and hole diameter. The criterion for coal identification is a bulk-density cutoff. The simulation results show that the ANN can deliver an overall accuracy of 96%. Because of the low net-to-gross ratio of coals within the Walloon sequence, a lower but reasonable F1 score of 0.78 is achievable for the coal sections. The proposed model can potentially be implemented in real time to identify coal intervals without additional logs and to aid validation of minimal log data.
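The labelling criterion and the evaluation metric above are both simple to state precisely. The sketch below labels depth samples as coal by a bulk-density cutoff and computes the F1 score for the coal class; the 1.8 g/cc value and the sample densities are illustrative assumptions, since the abstract states only that a density cutoff was used.

```python
def label_coal(bulk_density, cutoff=1.8):
    """Label a depth sample as coal when bulk density (g/cc) is
    below the cutoff (cutoff value assumed for illustration)."""
    return bulk_density < cutoff

def f1_score(predicted, actual):
    """F1 = harmonic mean of precision and recall for the coal class."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

densities = [2.4, 1.5, 2.6, 1.6, 2.5, 1.9]  # illustrative samples
labels = [label_coal(d) for d in densities]
```

With a low net-to-gross ratio, the F1 score on the coal class is a more honest measure than overall accuracy, which is dominated by the abundant non-coal samples.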


2020 ◽  
Vol 10 (14) ◽  
pp. 4959
Author(s):  
Reda Belaiche ◽  
Yu Liu ◽  
Cyrille Migniot ◽  
Dominique Ginhac ◽  
Fan Yang

Micro-Expression (ME) recognition is a hot topic in computer vision, as it offers a gateway to capturing and understanding daily human emotions. It is nonetheless a challenging problem because MEs are typically transient (lasting less than 200 ms) and subtle. Recent advances in machine learning enable new and effective methods for solving diverse computer vision tasks. In particular, deep learning on large datasets outperforms classical machine-learning approaches that rely on hand-crafted features. Even though the available datasets for spontaneous MEs are scarce and much smaller, off-the-shelf Convolutional Neural Networks (CNNs) still achieve satisfactory classification results. However, these networks are demanding in terms of memory consumption and computational resources, which poses great challenges when deploying CNN-based solutions in applications such as driver monitoring and comprehension recognition in virtual classrooms, which require fast and accurate recognition. Because these networks were initially designed for tasks in other domains, they are over-parameterized and need to be optimized for ME recognition. In this paper, we propose a new network based on the well-known ResNet18, which we optimize for ME classification in two ways. First, we reduce the depth of the network by removing residual layers. Second, we introduce a more compact representation of the optical flow used as input to the network. We present extensive experiments demonstrating that the proposed network achieves accuracy comparable to state-of-the-art methods while significantly reducing the required memory. Our best classification accuracy was 60.17% on the challenging composite dataset containing five objective classes. Our method takes only 24.6 ms to classify an ME video clip (less than the 40 ms duration of the shortest ME). Our CNN design is suitable for real-time embedded applications with limited memory and computing resources.
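One common way to compact a dense optical-flow field is to reduce it to a fixed-length, magnitude-weighted orientation histogram. The sketch below shows that general idea; it is an assumption for illustration, and the exact compact representation used in the paper may differ.

```python
import math

def flow_histogram(flow, bins=8):
    """Compress per-pixel optical-flow vectors (dx, dy) into a
    fixed-length, magnitude-weighted orientation histogram."""
    hist = [0.0] * bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)                 # flow magnitude
        angle = math.atan2(dy, dx) % (2 * math.pi)  # direction in [0, 2*pi)
        idx = min(int(angle / (2 * math.pi) * bins), bins - 1)
        hist[idx] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # normalised descriptor

desc = flow_histogram([(1.0, 0.0), (0.0, 2.0), (-1.0, 0.0)])
```

However the compaction is done, the payoff is the same: a much smaller input tensor, and therefore a smaller first network layer and lower memory traffic at inference time.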


2011 ◽  
Vol 20 (04) ◽  
pp. 753-781
Author(s):  
KAI CHEN ◽  
KIA MAKKI ◽  
NIKI PISSINOU

In metropolitan regions, most congestion or traffic jams are caused by the uneven distribution of traffic flow, which creates bottleneck points where the traffic volume exceeds the road capacity. Unexpected incidents are the next most probable cause of these bottleneck regions. Moreover, most drivers drive based on empirical experience without awareness of real-time traffic conditions, and this uninformed behavior can make the congestion problem worse. Prediction-based route guidance systems show great improvements in solving the inefficient-diversion problem by estimating future travel time when calculating accurate travel time is difficult. However, the performance of machine-learning-based prediction models built on historical data degrades sharply during congestion. This paper develops a new navigation system that reduces the travel time of the individual driver and distributes urban traffic flow efficiently in order to reduce the occurrence of congestion. Compared with previous route guidance systems, the results reveal that our system, applying the advanced multi-lane prediction based real-time fastest path (AMPRFP) algorithm, can significantly reduce travel time, especially when drivers travel in complex route environments and face frequent congestion. Unlike the previous system,1 it can be applied to either single-lane or multi-lane urban traffic networks where the causes of congestion are significantly complex. We also demonstrate the advantages of this system and verify the results using real highway traffic data and a synthetic experiment.
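The core of any prediction-based fastest-path search can be sketched as Dijkstra's algorithm over predicted, time-dependent travel times. In the toy network below, each edge carries a function from departure time to predicted travel time, standing in for the paper's multi-lane prediction model; the graph, times, and "rush hour" are invented for illustration.

```python
import heapq

def fastest_path(graph, start, goal, depart_time):
    """Dijkstra over time-dependent predicted travel times."""
    queue = [(depart_time, start, [start])]
    best = {}
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == goal:
            return t - depart_time, path   # (total travel time, route)
        if best.get(node, float("inf")) <= t:
            continue                       # already reached earlier
        best[node] = t
        for nxt, travel_time in graph.get(node, []):
            # Edge cost depends on the predicted time of arrival at it.
            heapq.heappush(queue, (t + travel_time(t), nxt, path + [nxt]))
    return None

# Toy network: edge B->C slows down during the "rush hour" t >= 8.
graph = {
    "A": [("B", lambda t: 5.0), ("C", lambda t: 12.0)],
    "B": [("C", lambda t: 10.0 if t >= 8 else 2.0)],
}
cost, route = fastest_path(graph, "A", "C", depart_time=0.0)
```

Re-running the search with `depart_time=8.0` makes the direct edge A->C the faster choice, which is exactly the kind of congestion-aware diversion the system aims to automate.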


2021 ◽  
Author(s):  
Fabian Corrêa Cardoso ◽  
Juan Malska ◽  
Paulo Ramiro ◽  
Giancarlo Lucca ◽  
Eduardo N. Borges ◽  
...  

Stock markets are responsible for the movement of huge amounts of financial resources around the world. These markets generate a high volume of transaction data, which, once analyzed, is very useful for many applications. In this paper we present BovDB, a data set built from the Brazilian Stock Exchange (B3) with information covering the years 1995 to 2020. We account for the impact of corporate events on stock prices by applying a cumulative adjustment factor to correct prices. The results were compared with public data from InfoMoney and BR Investing, showing that our methods are valid and in accordance with market standards. The BovDB data set can be used as a benchmark for different applications and is publicly available to any researcher on GitHub.
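Cumulative-factor price adjustment generally works backwards through the series, compounding the factor of each corporate event so that prices before the event are rescaled onto a continuous series. The sketch below shows the general technique with an invented 2-for-1 split; BovDB's exact procedure may differ in detail.

```python
def adjust_prices(raw_prices, events):
    """Back-adjust raw close prices with a cumulative factor.

    events maps a day index to the adjustment factor taking effect
    that day (e.g. 0.5 for a 2-for-1 split). Prices before each
    event are scaled so the series is continuous.
    """
    adjusted = []
    factor = 1.0
    for day in range(len(raw_prices) - 1, -1, -1):  # walk backwards
        adjusted.append(raw_prices[day] * factor)
        factor *= events.get(day, 1.0)  # compound factors going back
    return list(reversed(adjusted))

# 2-for-1 split takes effect on day 2: the raw price halves.
raw = [20.0, 21.0, 10.6, 10.8]
adj = adjust_prices(raw, {2: 0.5})  # [10.0, 10.5, 10.6, 10.8]
```

The adjusted series removes the artificial jump at the split while leaving post-event prices untouched, which is what makes return calculations across events meaningful.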


This research shows how to utilize machine-learning methods for anomaly detection on a computer network in real time. While utilizing machine learning for this task is not a novel idea, little literature addresses doing it in real time. Most machine-learning research in computer-network anomaly detection depends on the KDD '99 data set and aims to demonstrate the efficiency of the algorithms introduced. The emphasis on this data set has caused a lack of scientific papers explaining how to assemble network data, extract features, and train algorithms for use in real-time networks. It has also been argued that the KDD '99 data set is not appropriate for anomaly detection in real-time network systems. This research proposes a data-gathering procedure using a dummy network and synthetically generated network traffic, and analyzes the suitability of a One-class SVM. Because the efficiency of k-means clustering and LSTM neural networks proved lower than that of the One-class SVM, this research uses the results of existing research on LSTM and k-means clustering for comparison with reported outcomes of similar algorithms on the KDD '99 data set. Specifically, without using the KDD '99 data set, and working instead with synthetic network traffic, this research achieved higher accuracy than previous work.
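The shape of one-class anomaly detection can be illustrated without the SVM machinery: learn a boundary from normal traffic only, then flag anything that falls outside it. The sketch below is a deliberately simplified stand-in for a One-class SVM (distance to the centroid of normal feature vectors, thresholded at a training-score quantile); the feature vectors are invented for illustration.

```python
import statistics

def fit_detector(normal_traffic, quantile=0.95):
    """Learn a boundary around normal traffic feature vectors.

    A simplified stand-in for a One-class SVM: score each sample by
    its distance to the centroid of the normal traffic and flag
    anything beyond the chosen quantile of training scores.
    """
    dims = len(normal_traffic[0])
    centroid = [statistics.mean(s[d] for s in normal_traffic)
                for d in range(dims)]

    def score(sample):
        return sum((a - b) ** 2 for a, b in zip(sample, centroid)) ** 0.5

    scores = sorted(score(s) for s in normal_traffic)
    threshold = scores[int(quantile * (len(scores) - 1))]
    return lambda sample: score(sample) > threshold  # True = anomaly

# Feature vectors, e.g. (packets/s, mean packet size) -- illustrative.
normal = [(100.0, 500.0), (110.0, 480.0), (95.0, 510.0), (105.0, 495.0)]
is_anomaly = fit_detector(normal)
```

A real deployment would replace the centroid scorer with the kernelized One-class SVM decision function, but the train-on-normal-only, threshold-at-a-quantile structure is the same.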


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. R83-R91 ◽  
Author(s):  
Hassan Masoomzadeh ◽  
Penny J. Barton ◽  
Satish C. Singh

We have developed a pragmatic new processing strategy to enhance seismic information obtained from long-offset multichannel seismic data. The conventional processing approach, which treats data on a sample-by-sample basis, is applied at a coarser scale on groups of samples. Using this approach, a reflected event and its vicinity remain unstretched during the normal moveout correction. Isomoveout curves (lines of equal moveout) in the time-velocity panel are employed to apply a constant moveout correction to selected individual events, leading to a nonstretch stack. A zigzag stacking-velocity function is introduced as a combination of segments of appropriate isomoveout curves. By employing a zigzag velocity function, stretching of key events is avoided and thus information at far offset is preserved in the stack. The method is also computationally cost-effective. However, the zigzag stacking-velocity field must be consistent with target horizons. This method of horizon-consistent nonstretch moveout has been applied to a wide-angle data set from the North Atlantic margin, providing improved images of the basement interface, which was previously poorly imaged.
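The contrast between conventional NMO and the nonstretch approach comes down to what gets corrected: conventional processing shifts every sample by its own moveout, stretching the wavelet at far offsets, whereas applying one constant moveout (computed at the event time) to a group of samples leaves the event unstretched. A minimal numeric sketch, using the standard hyperbolic traveltime equation with illustrative values:

```python
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic traveltime: t(x) = sqrt(t0^2 + x^2 / v^2)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def constant_moveout_correct(trace_times, t0, offset, velocity):
    """Shift a group of samples by one constant moveout, computed at
    the event time t0, instead of sample-by-sample. Because every
    sample in the group shifts by the same amount, the event and its
    vicinity are not stretched."""
    moveout = nmo_time(t0, offset, velocity) - t0
    return [t - moveout for t in trace_times]

# Event at t0 = 2.0 s recorded at 3000 m offset, velocity 2000 m/s.
t_event = nmo_time(2.0, 3000.0, 2000.0)  # 2.5 s
corrected = constant_moveout_correct(
    [t_event, t_event + 0.05], 2.0, 3000.0, 2000.0
)
```

Note that the 0.05 s interval between the two samples survives the correction unchanged, which is precisely the nonstretch property; the zigzag stacking-velocity function described above selects which isomoveout segment supplies the constant shift for each target horizon.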


2018 ◽  
Vol 7 (2.19) ◽  
pp. 31
Author(s):  
K Chokkanathan ◽  
S Koteeswaran

Machine-learning algorithms are used extensively to perform important computational tasks with the help of sample data sets. In many cases, machine-learning algorithms provide good solutions where conventional programming fails to produce viable and economically cost-effective results. A huge range of deterministic problems is addressed and tackled using the available sample data sets. Because of this, machine-learning concepts are now used extensively in computer science and many other fields. But we still need to explore how to apply machine learning in specific areas such as network analysis, stock trading, spam filters, traffic analysis, and real-time and non-real-time traffic, which may not be covered in textbooks. Here I would like to discuss some key points that machine-learning researchers and practitioners can make use of, including shortcomings and concerns.

