Subsurface Analytics Case Study: Reservoir Simulation and Modeling of a Highly Complex Offshore Field in Malaysia Using Artificial Intelligence and Machine Learning

2020 ◽  
Author(s):  
Rahim Masoudi ◽  
Shahab D. Mohaghegh ◽  
Daniel Yingling ◽  
Amir Ansari ◽  
Hadi Amat ◽  
...  
Author(s):  
Nguyen Thi Ngoc Anh ◽  
Nguyen Danh Tu ◽  
Vijender Kumar Solanki ◽  
Nguyen Linh Giang ◽  
Vu Hoai Thu ◽  
...  

Background: In recent years, human resource management has played a crucial role in the operation of every company and organization. Both employee loyalty and employee churn influence an organization's operation, and the impact of a churned employee differs according to that employee's role. Objective: We therefore define two Employee Value Models (EVMs) for organizations and companies, based on employee features common to most companies. Methods: With the development of artificial intelligence, machine learning can produce data-driven predictive models of high accuracy. This paper therefore proposes integrating churn prediction, the EVM, and machine-learning methods such as support vector machines, logistic regression, and random forests. The strengths of each model are exploited and the weaknesses reduced, helping companies and organizations avoid losing high-value employees in the future. The prediction process integrating churn, employee value, and machine learning is described in detail in six steps. The advantage of the integrated model is that it gives the company more useful results than a churn-prediction model alone; the drawbacks are the complexity of the model and algorithms and the speed of computing. Results: A case study of an organization with 1,470 employee positions demonstrates the whole process of integrating churn prediction, the EVM, and machine learning. The accuracy of the integrated model is high, ranging from 82% to 85%. Moreover, some results on churn and employee value are analyzed. Conclusion: This paper proposes upgraded models for predicting which employees may leave an organization and shows that integrating the two models, the employee value model and churn prediction, is feasible.
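The abstract names three classifiers (support vector machine, logistic regression, random forest) but gives no code. A minimal sketch of that comparison step, using scikit-learn on synthetic employee features (the feature set, labels, and head count here are illustrative placeholders, not the paper's data):

```python
# Hedged sketch of the churn-prediction step: train the three classifiers
# named in the abstract on employee features and compare their accuracy.
# Features and labels are synthetic stand-ins for real HR data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1470  # mirrors the case-study head count
X = rng.normal(size=(n, 5))  # e.g. age, tenure, salary, rating, overtime
# synthetic churn label driven mainly by two of the features, plus noise
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "svm": SVC(),
    "logreg": LogisticRegression(),
    "rf": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

In the paper's integrated pipeline, each employee flagged as likely to churn would then be scored by the EVM so that retention effort targets high-value leavers first.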


2021 ◽  
Vol 73 (07) ◽  
pp. 44-45
Author(s):  
Chris Carpenter

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 201693, "Subsurface Analytics Case Study: Reservoir Simulation and Modeling of a Highly Complex Offshore Field in Malaysia Using Artificial Intelligence and Machine Learning," by Rahim Masoudi, SPE, Petronas; Shahab D. Mohaghegh, SPE, West Virginia University; and Daniel Yingling, Intelligent Solutions, et al., prepared for the 2020 SPE Annual Technical Conference and Exhibition, originally scheduled to be held in Denver, 5–7 October. The paper has not been peer reviewed.

Using commercial numerical reservoir simulators to build a full-field reservoir model while simultaneously history matching multiple dynamic variables for a highly complex offshore mature field in Malaysia had proved challenging. In the complete paper, the authors demonstrate how artificial intelligence (AI) and machine learning can be used to build a purely data-driven reservoir simulation model that successfully history matches all dynamic variables for wells in this field and subsequently can be used for production forecasting. This synopsis concentrates on the process used, while the complete paper provides results of the fully automated history matching.

Subsurface Analytics

In the presented technique, which the authors call subsurface analytics, data-driven pattern-recognition technologies are used to embed the physics of fluid flow through porous media and to create a model by discovering the best, most appropriate relationships between all measured data in each reservoir. This is an alternative to starting with the construction of mathematical equations to model the physics of fluid flow through porous media, followed by modification of geological models in order to achieve a history match. The key characteristic of subsurface analytics is that no interpretations, assumptions, or complex initial geological models (and thus no upscaling) are required. Furthermore, the main series of dynamic variables used to build the model is measured at the surface, while other major static, and sometimes even dynamic, characteristics are based on subsurface measurements, making this approach a combination of reservoir and wellbore-simulation models rather than merely a reservoir model. The history-matching step of the subsurface analytics workflow is completely automated.

Top-Down Modeling (TDM)

TDM is a data-driven reservoir modeling approach, under the realm of subsurface analytics technology, that uses AI and machine learning to develop full-field reservoir models based on measurements rather than solutions of governing equations. TDM integrates all available field measurements into a full-field reservoir model and matches the historical production of all individual wells in a mature field with a single AI-based model. The model is validated through blind history matching. The approach then can forecast a field's behavior on a well-by-well basis. Because TDM is a data-driven approach, quality assurance/quality control (QA/QC) of the input data is paramount before embarking on the modeling process, to ensure that the artificial neural network (ANN) is trained properly on a reliable data set. This includes understanding data availability and magnitude, analyzing well-by-well production performance trends, and identifying data anomalies.
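The TDM description boils down to: screen the measurements (QA/QC), then fit a neural network that maps measured well characteristics directly to production, with no flow equations. A minimal sketch of that idea, using scikit-learn's `MLPRegressor` on synthetic data (the feature names, anomaly rule, and network size are illustrative assumptions, not the authors' actual workflow):

```python
# Hedged sketch of the TDM idea: an ANN learns well production directly
# from measured static/dynamic features. Data and QA/QC rule are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
# e.g. porosity, net pay, choke size, wellhead pressure, months on production
X = rng.uniform(size=(n, 5))
y = 2.0 * X[:, 0] + X[:, 2] - 0.5 * X[:, 3] + rng.normal(scale=0.05, size=n)

# QA/QC step: drop rows with anomalous (here, near-zero) measurements
mask = (X > 0.01).all(axis=1)
X, y = X[mask], y[mask]

# train the ANN on standardized features; no governing equations involved
Xs = StandardScaler().fit_transform(X)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=1).fit(Xs, y)
print(round(ann.score(Xs, y), 2))  # R^2 on the training measurements
```

A real TDM would instead hold out entire wells and time windows (blind history matching) to validate the model before any forecasting.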


i-com ◽  
2021 ◽  
Vol 20 (1) ◽  
pp. 19-32
Author(s):  
Daniel Buschek ◽  
Charlotte Anlauff ◽  
Florian Lachner

Abstract This paper reflects on a case study of a user-centred concept development process for a Machine Learning (ML) based design tool, conducted at an industry partner. The resulting concept uses ML to match graphical user interface elements in sketches on paper to their digital counterparts to create consistent wireframes. A user study (N=20) with a working prototype shows that this concept is preferred by designers compared to the previous manual procedure. Reflecting on our process and findings, we discuss lessons learned for developing ML tools that respect practitioners' needs and practices.


2021 ◽  
Vol 11 (13) ◽  
pp. 5826
Author(s):  
Evangelos Axiotis ◽  
Andreas Kontogiannis ◽  
Eleftherios Kalpoutzakis ◽  
George Giannakopoulos

Ethnopharmacology experts face several challenges when identifying and retrieving documents and resources related to their scientific focus. The volume of sources that need to be monitored, the variety of formats utilized, and the varying quality of language use across sources present some of what we call "big data" challenges in the analysis of these data. This study aims to understand if and how experts can be supported effectively through intelligent tools in the task of ethnopharmacological literature research. To this end, we utilize a real case study of ethnopharmacology research focused on the southern Balkans and the coastal zone of Asia Minor, and we propose a methodology for more efficient research in ethnopharmacology. Our work follows an "expert–apprentice" paradigm in an automatic URL extraction process, through crawling, where the apprentice is a machine learning (ML) algorithm utilizing a combination of active learning (AL) and reinforcement learning (RL), and the expert is the human researcher. ML-powered research improved the effectiveness and efficiency of the domain expert by 3.1 and 5.14 times, respectively, fetching a total of 420 relevant ethnopharmacological documents in only 7 h versus an estimated 36 h of human-expert effort. Therefore, utilizing artificial intelligence (AI) tools to support the researcher can boost the efficiency and effectiveness of the identification and retrieval of appropriate documents.
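The expert–apprentice loop described above can be sketched in its simplest active-learning form: a classifier scores crawled pages for relevance and queries the human expert on the most uncertain ones (uncertainty sampling). Everything here (features, the oracle, the query budget) is an illustrative assumption; the paper's actual system also uses reinforcement learning for crawling, which this sketch omits:

```python
# Hedged sketch of the expert–apprentice loop via uncertainty sampling:
# the ML "apprentice" asks the human "expert" to label the pages it is
# least sure about. Data and the oracle rule are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))            # e.g. text features of crawled pages
true_label = (X[:, 0] > 0).astype(int)   # oracle standing in for the expert

# small seed set labeled by the expert, with both classes represented
pos = np.where(true_label == 1)[0][:5]
neg = np.where(true_label == 0)[0][:5]
labeled = list(pos) + list(neg)

for _ in range(5):                       # five expert-query rounds
    clf = LogisticRegression().fit(X[labeled], true_label[labeled])
    probs = clf.predict_proba(X)[:, 1]
    uncertainty = np.abs(probs - 0.5)    # 0 = maximally uncertain
    # ask the expert about the most uncertain still-unlabeled page
    candidates = [i for i in np.argsort(uncertainty) if i not in labeled]
    labeled.append(candidates[0])

print(clf.score(X, true_label))          # relevance accuracy after 15 labels
```

The point of the design is label economy: the expert annotates a handful of informative pages rather than the whole crawl, which is where the reported efficiency gains come from.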

