Well log data analytics: overview of applications to improve subsurface characterisation

2019 ◽  
Vol 59 (2) ◽  
pp. 874
Author(s):  
Irina Emelyanova ◽  
Chris Dyt ◽  
M. Ben Clennell ◽  
Jean-Baptiste Peyaud ◽  
Marina Pervukhina

Wireline log datasets complemented with core measurements and expert interpretation are vital for accurate reservoir characterisation. In many cases, effective use of this information for predicting rock properties requires application of advanced data analytics (DA) techniques. We developed non-linear prediction models by combining data- and knowledge-driven methods. These models were used for predicting total organic carbon and electro-facies from basic wireline logs. Four DA approaches were utilised: unsupervised, supervised, semi-supervised and expert rule-based. The unsupervised approach implements ensemble clustering for detecting variations in sedimentary sequences of the subsurface. The supervised approach predicts rock properties from well logs by applying ensemble learning that requires core data measurements. The semi-supervised approach builds a decision tree for iterative clustering of well logs to locate a specific facies, using criteria determined by a petrophysicist to decide at each tree node whether to continue or stop the partitioning. The expert rule-based approach combines clustering techniques at individual wells with an expert's methodology of interpreting facies to determine field-wide rock characterisation. Here we overview the developed models and their applications to log data from offshore and onshore Australian wells. We discuss the deep thinking–shallow learning versus shallow thinking–deep learning approaches in reservoir modelling and highlight the importance of close collaboration between data analysts and domain experts.
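The supervised approach described above can be sketched in a few lines: an ensemble regressor learns a rock property (here TOC) from basic wireline logs, calibrated against core measurements. The synthetic data, feature names and model settings below are illustrative assumptions, not the authors' actual workflow.

```python
# Hedged sketch: ensemble learning (random forest) predicting TOC from logs.
# The log responses and TOC relationship are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic "logs": gamma ray, bulk density, sonic slowness, resistivity
gr, rhob, dt, res = rng.normal(size=(4, n))
# Invented ground truth: TOC as a noisy function of the logs (core calibration)
toc = 2.0 + 0.8 * gr - 1.1 * rhob + 0.3 * dt + rng.normal(scale=0.2, size=n)

X = np.column_stack([gr, rhob, dt, res])
X_tr, X_te, y_tr, y_te = train_test_split(X, toc, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(f"held-out R^2 = {r2:.2f}")
```

In practice the training targets would come from core measurements at matched depths, and the held-out score would be checked against wells not used in training.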

Author(s):  
Yunpeng Li ◽  
Utpal Roy ◽  
Y. Tina Lee ◽  
Sudarsan Rachuri

Rule-based expert systems such as CLIPS (C Language Integrated Production System) are 1) based on if-then rules that elicit domain knowledge and 2) designed to reason new knowledge from existing knowledge and given inputs. Recently, data mining techniques have been advocated for discovering knowledge from massive historical or real-time sensor data. Combining top-down, expert-driven rule models with bottom-up, data-driven prediction models facilitates enrichment and improvement of the predefined knowledge in an expert system with data-driven insights. However, combining is possible only if there is a common and formal representation of these models, so that they can be exchanged, reused, and orchestrated among different authoring tools. This paper investigates the open standard PMML (Predictive Model Markup Language) for integrating rule-based expert systems with data analytics tools, so that a decision maker has access to powerful tools for both reasoning-intensive and data-intensive tasks. We present a process planning use case in the manufacturing domain, originally implemented as a CLIPS-based expert system. Different paradigms for interpreting expert system facts and rules as PMML models (and vice versa), as well as challenges in representing and composing these models, are explored and discussed in detail.
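The core idea, that the same if-then knowledge a CLIPS system encodes can live in a neutral, exchangeable form such as PMML's RuleSet model, can be made concrete with a small sketch. The rule encoding below is a hypothetical simplification in plain Python (not actual PMML or CLIPS syntax); the facts, rule conditions and conclusions are invented for illustration.

```python
# Hedged sketch: if-then production rules held as data, evaluated in order,
# mirroring a "firstHit" rule-selection method like PMML RuleSet's.
rules = [
    # (predicate over facts, conclusion) -- analogous to individual SimpleRule entries
    (lambda f: f["material"] == "steel" and f["thickness_mm"] > 5.0, "drill-slow"),
    (lambda f: f["material"] == "aluminium", "drill-fast"),
]

def infer(facts, default="inspect"):
    """Fire the first matching rule; fall back to a default conclusion."""
    for predicate, conclusion in rules:
        if predicate(facts):
            return conclusion
    return default

print(infer({"material": "steel", "thickness_mm": 8.0}))  # -> drill-slow
```

Because the rules are data rather than code baked into one engine, they could in principle be serialized, exchanged with a data-analytics tool, and recombined with a learned model's predictions, which is the interoperability the paper pursues via PMML.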


Author(s):  
Ahmad Muraji Suranto ◽  
Aris Buntoro ◽  
Carolus Prasetyadi ◽  
Ricky Adi Wibowo

In modeling the hydraulic fracturing program for unconventional shale reservoirs, information about elastic rock properties is needed, namely Young's modulus and Poisson's ratio, as the basis for determining the formation depth intervals with high brittleness. The elastic rock properties (Young's modulus and Poisson's ratio) are geomechanical parameters used to identify rock brittleness using core data (static data) and well log data (dynamic data). A common problem is that core data, the most reliable data, are not available, so well log data are used instead. The principle of measuring elastic rock properties in the rock mechanics laboratory is very different from measurement with well logs: laboratory measurements are made at high stresses/strains, low strain rates, and usually drained conditions, whereas well logging measures downhole using high-frequency sonic vibrations at very low stresses/strains, high strain rates, and always undrained conditions. For this reason, it is necessary to convert the dynamic elastic rock properties (Poisson's ratio and Young's modulus) to static values using empirical equations. The conversion of elastic rock properties (well logs) from dynamic to static using empirical equations shows a significant shift in the values of Young's modulus and Poisson's ratio, namely a shift from ductile-zone dominance to brittle-zone dominance. The conversion results were validated against rock mechanical test results from the analog outcrop cores (static), showing that the results correlate sufficiently well across the distribution range.
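The dynamic moduli the abstract refers to follow from standard elasticity relations on sonic-log velocities and bulk density; the dynamic-to-static step is an empirical calibration. The sketch below uses the exact textbook formulas for the dynamic properties, but the static conversion coefficients are placeholders, since the paper's actual empirical equations are fitted to its own core data.

```python
# Dynamic elastic properties from sonic logs (standard isotropic elasticity),
# plus a generic linear dynamic-to-static correction with placeholder coefficients.
def dynamic_elastic(vp, vs, rho):
    """vp, vs in m/s, rho in kg/m^3 -> (Young's modulus in Pa, Poisson's ratio)."""
    vp2, vs2 = vp * vp, vs * vs
    nu = (vp2 - 2.0 * vs2) / (2.0 * (vp2 - vs2))
    e = rho * vs2 * (3.0 * vp2 - 4.0 * vs2) / (vp2 - vs2)
    return e, nu

def static_young(e_dyn, a=0.7, b=0.0):
    """Hypothetical calibration E_static = a*E_dyn + b; real coefficients come
    from fitting laboratory core tests (static) against log-derived (dynamic) values."""
    return a * e_dyn + b

e_dyn, nu = dynamic_elastic(vp=4000.0, vs=2400.0, rho=2500.0)
print(f"E_dyn = {e_dyn / 1e9:.1f} GPa, nu = {nu:.3f}")  # E_dyn = 35.1 GPa, nu = 0.219
```

Because static moduli are usually lower than dynamic ones, a < 1 in the linear form above; the shift this correction induces is what moves intervals between the ductile and brittle classifications.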


2019 ◽  
Author(s):  
Oskar Flygare ◽  
Jesper Enander ◽  
Erik Andersson ◽  
Brjánn Ljótsson ◽  
Volen Z Ivanov ◽  
...  

**Background:** Previous attempts to identify predictors of treatment outcomes in body dysmorphic disorder (BDD) have yielded inconsistent findings. One way to increase precision and clinical utility could be to use machine learning methods, which can incorporate multiple non-linear associations in prediction models. **Methods:** This study used a random forests machine learning approach to test if it is possible to reliably predict remission from BDD in a sample of 88 individuals that had received internet-delivered cognitive behavioral therapy for BDD. The random forest models were compared to traditional logistic regression analyses. **Results:** Random forests correctly identified 78% of participants as remitters or non-remitters at post-treatment. The accuracy of prediction was lower in subsequent follow-ups (68%, 66% and 61% correctly classified at 3-, 12- and 24-month follow-ups, respectively). Depressive symptoms, treatment credibility, working alliance, and initial severity of BDD were among the most important predictors at the beginning of treatment. By contrast, the logistic regression models did not identify consistent and strong predictors of remission from BDD. **Conclusions:** The results provide initial support for the clinical utility of machine learning approaches in the prediction of outcomes of patients with BDD. **Trial registration:** ClinicalTrials.gov ID: NCT02010619.
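The modelling comparison in this study can be sketched compactly: a random forest and a logistic regression are fitted to baseline predictors and compared by cross-validated classification accuracy. The data below are synthetic stand-ins for the trial's 88 participants; the predictor names echo the abstract, but the effect sizes and the non-linear ground truth are invented for illustration.

```python
# Hedged sketch: random forest vs logistic regression for remission prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 88
depressive, credibility, alliance, severity = rng.normal(size=(4, n))
# Invented ground truth with an interaction term -- the kind of non-linear
# association a forest can exploit and a plain logistic model misses
logit = 1.2 * credibility - 0.9 * depressive * severity + 0.5 * alliance
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([depressive, credibility, alliance, severity])

rf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
lr_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"random forest: {rf_acc:.2f}, logistic regression: {lr_acc:.2f}")
```

With only 88 cases, cross-validation (rather than a single split) is the natural way to estimate accuracy, and feature importances from the fitted forest would correspond to the predictor ranking the study reports.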


2019 ◽  
Vol 41 (2) ◽  
pp. 284-287
Author(s):  
Pedro Guilherme Coelho Hannun ◽  
Luis Gustavo Modelli de Andrade

Abstract Introduction: The prediction of post-transplantation outcomes is clinically important and involves several problems. Current prediction models based on standard statistics are very complex, difficult to validate, and do not provide accurate predictions. Machine learning, a statistical technique that allows a computer to make future predictions from previous experience, is beginning to be used to solve these issues. In the field of kidney transplantation, computational forecasting has been reported in the prediction of chronic allograft rejection, delayed graft function, and graft survival. This paper describes machine learning principles and the steps involved in making a prediction, and briefly reviews its most recent applications in the literature. Discussion: There is compelling evidence that machine learning approaches based on donor and recipient data provide better prognoses of graft outcomes than traditional analysis. The immediate expectations that emerge from this new prediction modelling technique are that it will generate better clinical decisions based on dynamic and local practice data and optimize organ allocation as well as post-transplantation care management. Despite the promising results, there is not yet a substantial number of studies determining the feasibility of its application in a clinical setting. Conclusion: The way we deal with data stored in electronic health records will change radically in the coming years, and machine learning will become part of the clinical daily routine, whether to predict clinical outcomes or to suggest diagnoses based on institutional experience.


2021 ◽  
Vol 10 (4) ◽  
pp. 199
Author(s):  
Francisco M. Bellas Aláez ◽  
Jesus M. Torres Palenzuela ◽  
Evangelos Spyrakos ◽  
Luis González Vilas

This work presents new prediction models based on recent developments in machine learning methods, such as Random Forest (RF) and AdaBoost, and compares them with more classical approaches, i.e., support vector machines (SVMs) and neural networks (NNs). The models predict Pseudo-nitzschia spp. blooms in the Galician Rias Baixas. This work builds on a previous study by the authors (doi.org/10.1016/j.pocean.2014.03.003) but uses an extended database (from 2002 to 2012) and new algorithms. Our results show that RF and AdaBoost provide better prediction results compared to SVMs and NNs, as they show improved performance metrics and a better balance between sensitivity and specificity. Classical machine learning approaches show higher sensitivities, but at a cost of lower specificity and higher percentages of false alarms (lower precision). These results seem to indicate a greater adaptation of new algorithms (RF and AdaBoost) to unbalanced datasets. Our models could be operationally implemented to establish a short-term prediction system.
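The comparison above hinges on sensitivity versus specificity under class imbalance, since bloom events are rare. A small helper makes the trade-off concrete; the two confusion matrices below are hypothetical numbers chosen to mimic the reported behaviour (a classical model that is sensitive but raises many false alarms, versus an ensemble with a better balance), not the study's actual results.

```python
# Hedged sketch: sensitivity/specificity/precision from confusion-matrix counts.
def rates(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # fraction of real blooms caught
    specificity = tn / (tn + fp)   # fraction of non-blooms correctly dismissed
    precision = tp / (tp + fp)     # fraction of alarms that were real blooms
    return sensitivity, specificity, precision

# Imbalanced setting: 20 bloom events vs 180 non-events (illustrative counts)
svm_like = rates(tp=18, fn=2, fp=60, tn=120)   # sensitive but imprecise
rf_like = rates(tp=15, fn=5, fp=15, tn=165)    # balanced, fewer false alarms
print("SVM-like:", svm_like)
print("RF-like: ", rf_like)
```

On these counts the classical-style model catches 90% of blooms but only 23% of its alarms are real, while the ensemble-style model trades a little sensitivity (75%) for far higher specificity and precision, the pattern the abstract attributes to RF and AdaBoost on unbalanced data.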


1996 ◽  
Vol 29 (1) ◽  
pp. 5090-5095
Author(s):  
Vikram Krishnamurthy ◽  
H. Vincent Poor

2021 ◽  
Author(s):  
Yair Gordin ◽  
Thomas Bradley ◽  
Yoav O. Rosenberg ◽  
Anat Canning ◽  
Yossef H. Hatzor ◽  
...  

Abstract The mechanical and petrophysical behavior of organic-rich carbonates (ORC) is affected significantly by burial diagenesis and the thermal maturation of their organic matter. Therefore, establishing Rock Physics (RP) relations and appropriate models can be valuable in delineating the spatial distribution of key rock properties such as the total organic carbon (TOC), porosity, water saturation, and thermal maturity in the petroleum system. These key rock properties are among the most important to evaluate during hydrocarbon exploration and production operations, when establishing a detailed subsurface model is critical. High-resolution reservoir models are typically based on the inversion of seismic data to calculate the seismic layer properties such as P- and S-wave impedances (or velocities), density, Poisson's ratio, Vp/Vs ratio, etc. If velocity anisotropy data are also available, then another layer of data can be used as input for the subsurface model, leading to a better understanding of the geological section. The challenge is to establish reliable geostatistical relations between these seismic layer measurements and petrophysical/geomechanical properties using well logs and laboratory measurements. In this study, we developed RP models to predict the organic richness (TOC of 1-15 wt%), porosity (7-35 %), water saturation, and thermal maturity (Tmax of 420-435°C) of the organic-rich carbonate sections using well logs and laboratory core measurements derived from the Ness 5 well drilled in the Golan Basin (950-1350 m). The RP models are based primarily on the modified lower Hashin-Shtrikman bounds (MLHS) and Gassmann's fluid substitution equations. These organic-rich carbonate sections are unique in their relatively low burial diagenetic stage, characterized by a wide range of porosity which decreases with depth, and thermal maturation which increases with depth (from immature up to the oil window).
As confirmation of the method, the levels of organic content and maturity were confirmed using Rock-Eval pyrolysis data. Following the RP analysis, horizontal (HTI) and vertical (VTI) S-wave velocity anisotropy were analyzed using cross-dipole shear well logs (based on Stoneley waves response). It was found that anisotropy, in addition to the RP analysis, can assist in delineating the organic-rich sections, microfractures, and changes in gas saturation due to thermal maturation. Specifically, increasing thermal maturation enhances VTI and azimuthal HTI S-wave velocity anisotropies, in the ductile and brittle sections, respectively. The observed relationships are quite robust based on the high-quality laboratory and log data. However, our conclusions may be limited to the early stages of maturation and burial diagenesis, as at higher maturation and diagenesis the changes in physical properties can vary significantly.
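One building block of the rock-physics workflow named above, Gassmann fluid substitution, is compact enough to sketch directly. The equation itself is standard; the input moduli and porosity below are illustrative carbonate-like values, not the Ness 5 calibration.

```python
# Gassmann's equation: saturated bulk modulus from dry-frame, mineral and
# fluid bulk moduli plus porosity (all moduli in consistent units, here GPa).
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2
    return k_dry + num / den

# Illustrative inputs: calcite mineral (~77 GPa), brine (~2.5 GPa), 25% porosity
k_sat = gassmann_ksat(k_dry=20.0, k_min=77.0, k_fl=2.5, phi=0.25)
print(f"K_sat = {k_sat:.1f} GPa")  # K_sat = 25.2 GPa
```

Swapping `k_fl` between brine and gas values shifts `k_sat` noticeably, which is why fluid substitution helps distinguish saturation changes, such as the maturation-driven gas effects the paper discusses, from matrix effects.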


2021 ◽  
Author(s):  
Tao Lin ◽  
Mokhles Mezghani ◽  
Chicheng Xu ◽  
Weichang Li

Abstract Reservoir characterization requires accurate prediction of multiple petrophysical properties such as bulk density (or acoustic impedance), porosity, and permeability. However, this remains a big challenge in heterogeneous reservoirs due to significant diagenetic impacts including dissolution, dolomitization, cementation, and fracturing. Most well logs lack the resolution to capture rock properties in detail in a heterogeneous formation. Therefore, it is pertinent to integrate core images into the prediction workflow. This study presents a new approach to obtaining multiple petrophysical properties at high resolution by combining machine learning (ML) algorithms and computer vision (CV) techniques. The methodology can be used to automate the process of core data analysis with a minimum number of plugs, thus reducing human effort and cost and improving accuracy. The workflow consists of conditioning and extracting features from core images, correlating well logs and core analysis with those features to build ML models, and applying the models on new cores to predict petrophysical properties. The core images are preprocessed and analyzed using color models and texture recognition to extract image characteristics and core textures. The image features are then aggregated into a profile in depth, resampled and aligned with well logs and core analysis. The ML regression models, including classification and regression trees (CART) and deep neural networks (DNN), are trained and validated on filtered training samples of relevant features and target petrophysical properties. The models are then tested on a blind test dataset to evaluate prediction performance for the target petrophysical properties of grain density, porosity and permeability. A profile of histograms of each target property is computed to analyze the data distribution. The feature vectors are extracted from CV analysis of core images and gamma ray logs.
The importance of each feature for each target is generated by the CART model, which may be used to reduce model complexity in future model building. The model performances are evaluated and compared for each target. We achieved reasonably good correlation and accuracy with the models, for example, porosity R2=49.7% and RMSE=2.4 p.u., and logarithmic permeability R2=57.8% and RMSE=0.53. The field case demonstrates that including core image attributes can improve petrophysical regression in heterogeneous reservoirs. The approach can be extended to a multi-well setting to generate vertical distributions of petrophysical properties that can be integrated into reservoir modeling and characterization. Machine learning algorithms help automate the workflow and can be flexibly adjusted to take various inputs for prediction.
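The image-feature step of the workflow, condensing a core photograph into a depth profile of simple colour/texture statistics that can be aligned with well logs, can be sketched as follows. The synthetic grayscale "image", window size and chosen statistics are illustrative assumptions, not the paper's actual CV pipeline.

```python
# Hedged sketch: aggregate a core image (rows = depth) into per-window
# (mean, std) features, producing a depth profile to align with log data.
import numpy as np

rng = np.random.default_rng(2)
core = rng.uniform(0, 255, size=(1000, 64))  # synthetic grayscale core image

def depth_profile(img, window=50):
    """One feature row per depth window: brightness mean and a crude texture proxy."""
    feats = []
    for top in range(0, img.shape[0], window):
        chunk = img[top : top + window]
        feats.append((chunk.mean(), chunk.std()))
    return np.array(feats)

profile = depth_profile(core)
print(profile.shape)  # (20, 2): 1000 image rows / 50-row windows, 2 features each
```

In the paper's workflow this profile would then be resampled to log depth, merged with gamma ray and core-analysis data, and fed to the CART/DNN regressors; richer colour-model and texture descriptors would replace the two toy statistics used here.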


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lei Li ◽  
Desheng Wu

**Purpose:** The infraction of securities regulations (ISRs) by listed firms in their day-to-day operations and management has become a common problem. This paper proposes several machine learning approaches to forecast listed corporates' risk of infractions, addressing supervision that is currently neither effective nor precise. **Design/methodology/approach:** The overall research framework designed for forecasting infractions (ISRs) includes data collection and cleaning, feature engineering, data splitting, prediction approach application and model performance evaluation. We select logistic regression, naïve Bayes, random forest, support vector machines, artificial neural networks and long short-term memory networks (LSTMs) as ISR prediction models. **Findings:** The results show that models incorporating prior infractions significantly improve ISR prediction over those without them, especially on large sample sets. The results also indicate that, when judging whether a company has infractions, attention should be paid to novel artificial intelligence methods, the company's previous infractions, and large datasets. **Originality/value:** The findings could be used to identify listed corporates' ISRs to a certain degree. Overall, the results elucidate the value of prior infractions of securities regulations (ISRs). This shows the importance of including more data sources when constructing distress models, rather than only building increasingly complex models on the same data. This is also beneficial to the regulatory authorities.
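The headline finding, that adding a prior-infractions feature improves prediction, can be sketched end-to-end in the framework's own steps: feature engineering, data split, model fitting, evaluation. The features, effect sizes and data below are synthetic stand-ins for the listed-firm dataset, chosen only to illustrate the comparison.

```python
# Hedged sketch: same model with and without a prior-infractions feature,
# compared by held-out AUC. All data and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
leverage, roa = rng.normal(size=(2, n))
prior_infraction = rng.integers(0, 2, size=n)  # the key feature per the findings
logit = 1.5 * prior_infraction + 0.6 * leverage - 0.4 * roa
y = (logit + rng.normal(size=n) > 0.8).astype(int)

X = np.column_stack([leverage, roa, prior_infraction])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

auc_with = roc_auc_score(
    y_te, LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
)
auc_without = roc_auc_score(
    y_te, LogisticRegression().fit(X_tr[:, :2], y_tr).predict_proba(X_te[:, :2])[:, 1]
)
print(f"AUC with prior infractions: {auc_with:.2f}, without: {auc_without:.2f}")
```

The same split-and-compare pattern extends directly to the paper's other models (random forest, SVM, LSTM); logistic regression is used here only because it is the simplest member of that list.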

