The New Industrial Data Economy

2019
Vol. 141 (05)
pp. 38-41
Author(s):  
Tim Lieuwen
Bobby Noble

Data-driven approaches are increasingly valuable as our ability to store massive amounts of data, the computational power to crunch through it, and the advanced analytics to make sense of it have come to maturity. These opportunities have led to the development of major facilities for aggregating, analyzing, and monetizing data from industrial sources. But the promise of Big Data, machine learning, and data analytics is predicated on access to data. This article delves into four distinct but somewhat overlapping challenges at play in terms of access to data: ownership of data, data nationalism, cybersecurity, and data privacy.

2021
Vol. 73 (03)
pp. 25-30
Author(s):  
Srikanta Mishra
Jared Schuetter
Akhil Datta-Gupta
Grant Bromhal

Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, and sports. The focus of this article is to examine where things stand with regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy. It is useful to start with some definitions to establish a common vocabulary.

Data analytics (DA): Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.

Machine learning (ML): Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.

Artificial intelligence (AI): Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating).

Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is essentially a subset of DA and a core enabling element of the broader decision-making construct that is AI. In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources.
Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics at conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted to practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019). Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without a foundational understanding, whereas others may be holding off on using these methods because they lack formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems.
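The ML definition above, building a model between predictors and response, can be sketched in a few lines. This is an illustrative stand-in only (the data and coefficients are invented, not from the article): an ordinary least-squares fit that infers a linear input/output relationship from data, using just the standard library.

```python
# Minimal sketch of "building a model between predictors and response":
# fit y = a*x + b by ordinary least squares, then predict on new inputs.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(a, b, x):
    """Apply the fitted model to a new predictor value."""
    return a * x + b

# Illustrative "training" data: the response is exactly 2*x + 1,
# so the fit recovers the underlying relationship.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)              # 2.0 1.0
print(predict(a, b, 10))  # 21.0
```

In practice the "algorithm" between predictors and response is usually far more complex (and often a black box, as the article notes), but the workflow of fit-then-predict is the same.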


Author(s):  
Peter V. Coveney
Edward R. Dougherty
Roger R. Highfield

The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’.
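The authors' central claim, that curve-fitting methods can fail outside the range of their training data because they do not model the structure of the underlying system, is easy to demonstrate. The sketch below is an illustrative toy, not from the paper: it fits a polynomial exactly through samples of an "unknown" process (here exp(x) on [0, 2]) and compares interpolation inside that range with extrapolation beyond it.

```python
# Toy demonstration of extrapolation failure: a model that fits the
# training data perfectly can still be badly wrong outside its range.
import math

def lagrange_fit(xs, ys):
    """Return a function evaluating the polynomial through (xs, ys)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

xs = [0.0, 0.5, 1.0, 1.5, 2.0]        # training range: [0, 2]
ys = [math.exp(x) for x in xs]        # the "unknown" process
model = lagrange_fit(xs, ys)

inside = abs(model(1.25) - math.exp(1.25))   # small: interpolation works
outside = abs(model(4.0) - math.exp(4.0))    # large: extrapolation fails
print(inside < 0.01, outside > 1.0)          # True True
```

The fit is exact at every training point, yet a structural model (knowing the process is exponential) would extrapolate correctly where the polynomial cannot; this is the distinction the authors draw between data-driven fitting and theory-guided modelling.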


2021
Author(s):  
Serena Wen
Yu Sun

In Lewis and Clark High School's Key Club, meetings are always held in a crowded classroom. The system of event sign-up is inefficient and hinders members from joining events. This has led to students becoming discouraged from joining Key Club and has often resulted in a lack of volunteers for important events. The club needed a more efficient way of connecting volunteers with volunteering opportunities. To solve this problem, we developed a Volunteer Match mobile application for Key Club using Dart and the Flutter framework. The next steps will be to add a volunteer event recommendation and matching feature, utilizing the results from the research on machine-learning models and algorithms in this paper.


2021
Vol. 73 (09)
pp. 43-43
Author(s):  
Reza Garmeh

The digital transformation that began several years ago continues to grow and evolve. With new advancements in data analytics and machine-learning algorithms, field developers today see more benefits to upgrading their traditional development workflows to automated artificial-intelligence workflows. The transformation has helped develop more-efficient and truly integrated development approaches. Many development scenarios can be automatically generated, examined, and updated very quickly. These approaches become more valuable when coupled with physics-based integrated asset models that are kept close to actual field performance to reduce uncertainty in reactive decision making. In unconventional basins with enormous completion and production databases, data-driven decisions powered by machine-learning techniques are increasingly popular for solving field development challenges and optimizing cube development. Finding a trend within massive amounts of data requires augmented artificial intelligence, where machine learning and human expertise are coupled. With slowed activity and uncertainty in the oil and gas industry from the COVID-19 pandemic, and growing pressure for cleaner energy and tighter environmental regulations, operators have had to adjust their economic models to account for environmental considerations, predict operational hazards, and plan mitigations. This has highlighted the value of field development optimization, shifting from traditional workflow iterations of data assimilation and sequential decision making toward deep-reinforcement-learning algorithms that find the best placement and type for the next producer or injector well. Operators are trying to adapt to the new environment and enhance their capabilities to efficiently plan, execute, and operate field development plans. Collaboration between disciplines and integrated analyses are key to the success of optimized development strategies.
These selected papers and the suggested additional reading provide a good view of what is evolving in field development workflows using data analytics and machine learning in the era of digital transformation.

Recommended additional reading at OnePetro: www.onepetro.org.

SPE 203073 - Data-Driven and AI Methods To Enhance Collaborative Well Planning and Drilling-Risk Prediction by Richard Mohan, ADNOC, et al.

SPE 200895 - Novel Approach To Enhance the Field Development Planning Process and Reservoir Management To Maximize the Recovery Factor of Gas Condensate Reservoirs Through Integrated Asset Modeling by Oswaldo Espinola Gonzalez, Schlumberger, et al.

SPE 202373 - Efficient Optimization and Uncertainty Analysis of Field Development Strategies by Incorporating Economic Decisions in Reservoir Simulation Models by James Browning, Texas Tech University, et al.
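The reinforcement-learning idea mentioned above, choosing the next well by trial and feedback rather than exhaustive workflow iteration, can be illustrated with a toy. The sketch below is not the deep-RL method the article refers to; it is a simple epsilon-greedy bandit over hypothetical candidate sites, with invented payoffs and parameters, that shows the same explore/exploit loop in miniature.

```python
# Toy epsilon-greedy sketch: pick the next "well site" by balancing
# exploration of candidates against exploitation of the best estimate.
import random

random.seed(42)

# Hypothetical mean production payoff per candidate site (illustrative).
TRUE_PAYOFF = {"site_A": 1.0, "site_B": 3.0, "site_C": 2.0}

def noisy_reward(site):
    """Simulated outcome: true mean payoff plus observation noise."""
    return TRUE_PAYOFF[site] + random.gauss(0, 0.5)

def epsilon_greedy(n_trials=2000, epsilon=0.1):
    estimates = {s: 0.0 for s in TRUE_PAYOFF}
    counts = {s: 0 for s in TRUE_PAYOFF}
    for _ in range(n_trials):
        if random.random() < epsilon:
            site = random.choice(list(TRUE_PAYOFF))   # explore
        else:
            site = max(estimates, key=estimates.get)  # exploit
        r = noisy_reward(site)
        counts[site] += 1
        # Incremental running-mean update of the payoff estimate.
        estimates[site] += (r - estimates[site]) / counts[site]
    return max(estimates, key=estimates.get)

print(epsilon_greedy())  # expected to settle on "site_B"
```

Real well-placement problems add sequential state (reservoir response to each well), which is why the article points to deep reinforcement learning rather than a stateless bandit, but the feedback-driven selection loop is the common core.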


2021
Vol. 2021
pp. 1-9
Author(s):  
Muhammad Babar
Muhammad Usman Tariq
Ahmed S. Almasoud
Mohammad Dahman Alshehri

The recent spread of big data has enabled the realization of AI and machine learning. With the rise of big data and machine learning, the idea of improving accuracy and enhancing the efficacy of AI applications is also gaining prominence. In the context of traffic applications, machine-learning solutions provide improved safety in hazardous traffic circumstances. Existing architectures face various challenges, of which data privacy is the foremost for vulnerable road users (VRUs). A key reason for failure in traffic control for pedestrians is flawed handling of user privacy. The user data are at risk and are prone to several privacy and security gaps. If an intruder succeeds in infiltrating the system, exposed data can be maliciously influenced, fabricated, and misrepresented for illegitimate purposes. In this study, an architecture is proposed based on machine learning to analyze and process big data efficiently in a secure environment. The proposed model considers the privacy of users during big data processing. The proposed architecture is a layered framework with a parallel and distributed module that uses machine learning on big data to achieve secure big data analytics. The architecture includes a distinct unit for privacy management using a machine-learning classifier. A stream-processing unit is also integrated with the architecture to process the information. The proposed system is evaluated using real-time datasets from various sources and experimentally tested with reliable datasets, which demonstrates the effectiveness of the proposed architecture. The data ingestion results are also highlighted, along with training and validation results.
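The privacy-management unit described in this abstract can be sketched in miniature. The toy below is not the authors' classifier: it uses a trivial keyword-based sensitivity score as a stand-in for a machine-learning model, and the field names and threshold are invented, but it shows the architectural idea of a gate that scrubs sensitive records from a stream before they reach the analytics layer.

```python
# Minimal sketch of a privacy gate in a stream-processing pipeline:
# score each record's sensitivity, strip sensitive fields above a threshold.

# Hypothetical field names a VRU-traffic system might treat as sensitive.
SENSITIVE_FIELDS = {"name", "phone", "gps_trace", "license_plate"}

def sensitivity_score(record):
    """Fraction of a record's fields that are considered sensitive.
    (Stand-in for the article's ML classifier.)"""
    if not record:
        return 0.0
    hits = sum(1 for field in record if field in SENSITIVE_FIELDS)
    return hits / len(record)

def privacy_gate(stream, threshold=0.25):
    """Yield records with sensitive fields removed when the score is high."""
    for record in stream:
        if sensitivity_score(record) >= threshold:
            record = {k: v for k, v in record.items()
                      if k not in SENSITIVE_FIELDS}
        yield record

incoming = [
    {"speed": 42, "lane": 2},
    {"speed": 30, "name": "A. User", "gps_trace": "0xA1"},
]
cleaned = list(privacy_gate(incoming))
print(cleaned[1])  # {'speed': 30}
```

In the article's layered design this gate sits between ingestion and analytics, so downstream big-data processing never sees raw identifying fields; a trained classifier would replace the keyword score.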

