A Digital Oilfield Comprehensive Study: Automated Intelligent Production Network Optimization

2021 ◽  
Author(s):  
Aulia Ahmad Naufal ◽  
Sabrina Metra

Abstract: Production optimization at the network level has proven to be an effective, low-capital method of maximizing a field's production potential. As it stands, however, it is a heavy process to set up, with several challenges such as data quality issues and the tedious, repetitive work required to deploy and reuse a complete network model. Leveraging technologies from a flow assurance simulator, a Python Application Programming Interface (API) toolkit, open-source machine learning packages in Python, and a commercial visualization dashboard, this paper proposes a series of workflows to simplify model deployment and to set up an automatic advisory system that provides insight as a means to justify an engineer's day-to-day engineering decisions. Three steps were prepared to achieve a field-level automated optimization system. The first is the creation of a digital twin of the well and network model. To eliminate potential data errors, reduce the time consumed, and merge the various parts of the model into one, a scalable Python script was written. Second, an automated calibration workflow was created, since performance issues also arise when matching calibrations branch by branch. A combination of technologies was therefore used to automate daily data acquisition and model updates from the production database and to run a supervised machine learning model that continuously calibrates the network model. The last step is a customizable optimization workflow based on field KPIs, whose results are derived from a daily optimization run. The results are available in a personalized network surveillance dashboard that engineers can access to make rapid decisions. From the first and second steps, the time consumed was reduced from 30 minutes/well to 10 minutes/well in the bulk well-modelling workflow, and from 2 hours to 10 minutes for the network model merge, assuming 100 wells in one network.
It also greatly improves data integrity and consistency, as it eliminates the wearisome manual input process. In the last step, the model was successfully updated with the latest production data, and the wells' IPR Liquid PI, reservoir pressure, and holdup factor were predicted by ML with more than 90% accuracy. For result delivery, the surveillance dashboard is populated daily with the network production data, flowing parameters, and operational recommendations. It is estimated that more than 90% of the time is saved by moving from manual individual runs to digital comprehensive optimization.
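The continuous-calibration step described above can be pictured as a supervised regression that maps daily flowing parameters to well-model parameters such as Liquid PI. A minimal sketch, assuming a random-forest regressor and invented feature columns (the abstract does not disclose the actual model or inputs):

```python
# Hypothetical sketch of continuously recalibrating a well-model parameter
# (here, liquid productivity index) from daily production data with a
# supervised regressor. Columns and data are illustrative assumptions,
# not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500  # daily records pooled across wells

# Illustrative flowing parameters: wellhead pressure, temperature,
# choke opening, water cut
X = rng.uniform([50, 60, 10, 0.0], [250, 120, 100, 0.9], size=(n, 4))
# Synthetic target: liquid PI with noise (stand-in for real well histories)
y = 0.02 * X[:, 0] + 0.5 * X[:, 2] / X[:, 1] - 2.0 * X[:, 3] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # held-out R^2 on the synthetic data
print(f"held-out R^2: {r2:.2f}")
```

In a daily workflow the freshly fitted model would overwrite the previous day's calibration before the optimization run.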

Algorithms ◽  
2020 ◽  
Vol 13 (4) ◽  
pp. 81 ◽  
Author(s):  
Filippo Giammaria Praticò ◽  
Rosario Fedele ◽  
Vitalii Naumov ◽  
Tomas Sauer

The current methods that aim at monitoring the structural health status (SHS) of road pavements allow detecting surface defects and failures. Notwithstanding this, there is a lack of methods and systems able to identify concealed cracks (particularly, bottom-up cracks) and to monitor their growth over time. For this reason, the objective of this study is to set up a supervised machine learning (ML)-based method for the identification and classification of the SHS of a differently cracked road pavement based on its vibro-acoustic signature. The method aims at collecting these signatures (using acoustic sensors located at the roadside) and classifying the pavement's SHS through ML models. Different ML classifiers (i.e., multilayer perceptron (MLP), convolutional neural network (CNN), random forest classifier (RFC), and support vector classifier (SVC)) were used and compared. Results show the possibility of associating, with great accuracy (i.e., MLP = 91.8%, CNN = 95.6%, RFC = 91.0%, and SVC = 99.1%), a specific vibro-acoustic signature with a differently cracked road pavement. These results are encouraging and represent the basis for the application of the proposed method in real contexts, such as monitoring roads and bridges using wireless sensor networks, which is the target of future studies.
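The classifier comparison above can be sketched with scikit-learn for three of the four models (MLP, RFC, SVC); the dataset below is a synthetic stand-in, since the study's vibro-acoustic features are not reproduced here:

```python
# Illustrative comparison of three of the classifiers named in the abstract
# on a synthetic multi-class dataset (a stand-in for crack-status classes).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "RFC": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVC": SVC(kernel="rbf", random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

The CNN variant would additionally require a deep-learning framework and the raw signal spectrograms, so it is omitted from this sketch.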


Author(s):  
Thu T. Nguyen ◽  
Shaniece Criss ◽  
Pallavi Dwivedi ◽  
Dina Huang ◽  
Jessica Keralis ◽  
...  

Background: Anecdotal reports suggest a rise in anti-Asian racial attitudes and discrimination in response to COVID-19. Racism can have significant social, economic, and health impacts, but there has been little systematic investigation of increases in anti-Asian prejudice. Methods: We utilized Twitter's Streaming Application Programming Interface (API) to collect 3,377,295 U.S. race-related tweets from November 2019–June 2020. Sentiment analysis was performed using a support vector machine (SVM), a supervised machine learning model. Accuracy for identifying negative sentiment, comparing the machine learning model to manually labeled tweets, was 91%. We investigated changes in racial sentiment before and following the emergence of COVID-19. Results: The proportion of negative tweets referencing Asians increased by 68.4% (from 9.79% in November to 16.49% in March). In contrast, the proportion of negative tweets referencing other racial/ethnic minorities (Blacks and Latinx) remained relatively stable during this time period, declining less than 1% for tweets referencing Blacks and increasing by 2% for tweets referencing Latinx. Common themes that emerged during the content analysis of a random subsample of 3300 tweets included: racism and blame (20%), anti-racism (20%), and daily life impact (27%). Conclusion: Social media data can be used to provide timely information to investigate shifts in area-level racial sentiment.
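An SVM sentiment pipeline of the kind described can be sketched with scikit-learn: TF-IDF features feeding a linear support vector classifier. The tiny hand-labeled examples below are invented placeholders, not the study's data:

```python
# Minimal sketch of an SVM sentiment classifier: TF-IDF text features
# feeding a linear SVM. Toy training examples are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "so grateful for my community today",
    "this neighborhood is wonderful and welcoming",
    "proud of everyone supporting each other",
    "what a hateful and disgusting thing to say",
    "these comments are racist and awful",
    "absolutely vile behavior online today",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = not negative, 1 = negative sentiment

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["such awful and hateful replies"]))
```

In the study, accuracy was measured against manually labeled tweets; the same pipeline supports that comparison via `sklearn.metrics.accuracy_score`.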


Author(s):  
Matteo Rucco ◽  
Franca Giannini ◽  
Katia Lupinetti ◽  
Marina Monti

Abstract: In this paper, we report on a data analysis process for the automated classification of mechanical components. In particular, we describe how to implement a machine learning system for the automated classification of parts belonging to several sub-categories. We collect models that are typically used in the mechanical industry, and we represent each object by a collection of features. We illustrate how to set up a supervised multi-layer artificial neural network with an ad hoc classification schema. We test our solution on a dataset of 2354 elements described by 875 features and spanning 15 sub-categories. We report the accuracy of classification in terms of the average area under the ROC curves and the ability to classify 606 unknown 3D objects by similarity coefficients. Our parts' classification system outperforms a classifier based on the Light Field Descriptor, which, as far as we know, currently represents the gold standard for the identification of most types of 3D mechanical objects.
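The evaluation style described above, a multi-layer perceptron scored by average one-vs-rest area under the ROC curve, can be sketched as follows; the synthetic dataset stands in for the 875-feature part descriptors:

```python
# Sketch of a multi-layer neural network classifier evaluated by macro-
# averaged one-vs-rest ROC AUC. Dataset dimensions are illustrative, not
# the paper's 2354 x 875 part-descriptor matrix.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=900, n_features=40, n_informative=15,
                           n_classes=5, random_state=1)  # 5 stand-in sub-categories
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=1)
mlp.fit(X_tr, y_tr)
proba = mlp.predict_proba(X_te)
auc = roc_auc_score(y_te, proba, multi_class="ovr", average="macro")
print(f"average ROC AUC: {auc:.3f}")
```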


Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 900
Author(s):  
Sašo Karakatič

The quality of machine learning models can suffer when inappropriate data is used, which is especially prevalent in high-dimensional and imbalanced data sets. Data preparation and preprocessing can mitigate some problems and can thus result in better models. The use of meta-heuristic and nature-inspired methods for data preprocessing has become common, but these approaches are still not readily available to practitioners through a simple and extendable application programming interface (API). In this paper the EvoPreprocess open-source Python framework, which preprocesses data with the use of evolutionary and nature-inspired optimization algorithms, is presented. The main problems addressed by the framework are data sampling (simultaneous over- and under-sampling of data instances), feature selection, and data weighting for supervised machine learning problems. The EvoPreprocess framework provides a simple, object-oriented, and parallelized API for the preprocessing tasks and can be used with the scikit-learn and imbalanced-learn Python machine learning libraries. The framework uses well-known self-adaptive nature-inspired meta-heuristic algorithms and can easily be extended with custom optimization and evaluation strategies. The paper presents the architecture of the framework, its use, experimental results, and a comparison to other common preprocessing approaches.
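To convey the technique class the framework wraps, without guessing at EvoPreprocess's own API, here is a bare-bones evolutionary feature-selection loop: a population of binary feature masks whose fitness is cross-validated accuracy, evolved by elitist mutation. This is a sketch of the general idea only, not the library's interface:

```python
# Not the EvoPreprocess API -- a minimal sketch of evolutionary feature
# selection: binary masks as individuals, CV accuracy as fitness,
# mutation-only elitist evolution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((10, X.shape[1])) < 0.5  # initial population of feature masks
for _ in range(15):  # generations
    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()]
    # next generation: mutated copies of the best mask, elite kept unchanged
    pop = best ^ (rng.random((10, X.shape[1])) < 0.1)
    pop[0] = best
print("selected features:", np.flatnonzero(best))
```

EvoPreprocess packages this kind of loop behind a scikit-learn-compatible, parallelized interface and offers many nature-inspired optimizers instead of plain mutation.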


2020 ◽  
Vol 14 (2) ◽  
pp. 140-159
Author(s):  
Anthony-Paul Cooper ◽  
Emmanuel Awuni Kolog ◽  
Erkki Sutinen

This article builds on previous research around the exploration of the content of church-related tweets. It does so by exploring whether the qualitative thematic coding of such tweets can, in part, be automated by the use of machine learning. It compares three supervised machine learning algorithms to understand how useful each algorithm is at a classification task, based on a dataset of human-coded church-related tweets. The study finds that one such algorithm, Naïve Bayes, performs better than the other algorithms considered, returning Precision, Recall, and F-measure values which each exceed an acceptable threshold of 70%. This has far-reaching consequences at a time when the high volume of social media data, in this case Twitter data, means that the resource-intensity of manual coding approaches can act as a barrier to understanding how the online community interacts with, and talks about, church. The findings presented in this article offer a way forward for scholars of digital theology to better understand the content of online church discourse.
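The classification task described can be sketched with a Naïve Bayes text classifier evaluated by the same three metrics (Precision, Recall, F-measure). The toy corpus below is invented for illustration, not the article's human-coded dataset:

```python
# Sketch of a Naive Bayes tweet classifier scored with precision, recall
# and F-measure. Toy training corpus is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_tweets = [
    "lovely sermon at church this sunday", "our parish bake sale raised funds",
    "worship team sounded amazing", "match day traffic is terrible",
    "new phone arrived today", "great film at the cinema tonight",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = church-related theme, 0 = other

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_tweets, train_labels)

test_tweets = ["sunday worship was wonderful", "the cinema was packed"]
test_labels = [1, 0]
pred = clf.predict(test_tweets)
p, r, f, _ = precision_recall_fscore_support(test_labels, pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```

The article's 70% threshold would be checked by comparing each of the three returned values against 0.7.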


2017 ◽  
Author(s):  
Sabrina Jaeger ◽  
Simone Fulle ◽  
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing the vectors of the individual substructures and, for instance, fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can easily be combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and can thus also easily be used for proteins with low sequence similarities.
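The compound-encoding step, summing the pre-trained vectors of a molecule's substructure "words", can be sketched as follows. The tiny random embedding table is a placeholder for a trained Mol2vec model, and the substructure identifiers are hypothetical (real Mol2vec uses Morgan-algorithm-derived identifiers):

```python
# Conceptual sketch of the Mol2vec encoding step: a compound vector is the
# sum of the (pre-trained) vectors of its substructures. Random embeddings
# stand in for a trained model; substructure IDs are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
dim = 8  # trained Mol2vec models typically use e.g. 300 dimensions
vocab = ["sub_A", "sub_B", "sub_C", "sub_D"]
embeddings = {s: rng.normal(size=dim) for s in vocab}

def encode(compound_substructures):
    """Sum substructure vectors; unseen substructures contribute nothing."""
    vecs = [embeddings[s] for s in compound_substructures if s in embeddings]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

compound = ["sub_A", "sub_C", "sub_C"]  # substructures extracted from one molecule
vec = encode(compound)
print(vec.shape)
```

The resulting dense vector is what would be fed into a downstream supervised model for property prediction.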

