Server-Side Log Data Analytics for I/O Workload Characterization and Coordination on Large Shared Storage Systems

Author(s):  
Yang Liu ◽  
Raghul Gunasekaran ◽  
Xiaosong Ma ◽  
Sudharshan S. Vazhkudai
2021 ◽  
Author(s):  
Florian Skopik ◽  
Markus Wurzenberger ◽  
Max Landauer

2019 ◽  
Vol 59 (2) ◽  
pp. 874
Author(s):  
Irina Emelyanova ◽  
Chris Dyt ◽  
M. Ben Clennell ◽  
Jean-Baptiste Peyaud ◽  
Marina Pervukhina

Wireline log datasets complemented with core measurements and expert interpretation are vital for accurate reservoir characterisation. In many cases, effective use of this information for predicting rock properties requires the application of advanced data analytics (DA) techniques. We developed non-linear prediction models by combining data- and knowledge-driven methods. These models were used for predicting total organic carbon and electro-facies from basic wireline logs. Four DA approaches were utilised: unsupervised, supervised, semi-supervised and expert rule-based. The unsupervised approach implements ensemble clustering for detecting variations in sedimentary sequences of the subsurface. The supervised approach predicts rock properties from well logs by applying ensemble learning that requires core data measurements. The semi-supervised approach builds a decision tree for iterative clustering of well logs to locate a specific facies and uses criteria determined by a petrophysicist to decide at each tree node whether to continue or stop the partitioning. The expert rule-based approach combines clustering techniques at individual wells with an expert’s methodology of interpreting facies to determine field-wide rock characterisation. Here we provide an overview of the developed models and their applications to log data from offshore and onshore Australian wells. We discuss the deep thinking–shallow learning versus shallow thinking–deep learning approaches in reservoir modelling and highlight the importance of close collaboration between data analysts and domain experts.
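As an illustration of the supervised approach described above, the following minimal sketch trains an ensemble learner (a random forest) on depths where core-measured total organic carbon is available and then applies it to an uncored well. The log mnemonics, file names and model settings are illustrative assumptions, not those used in the study.

```python
# Hypothetical sketch of the supervised workflow: ensemble learning on
# wells with core measurements, then prediction over uncored intervals.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

LOG_CURVES = ["GR", "RHOB", "NPHI", "DT", "RDEEP"]  # basic wireline logs (placeholder mnemonics)

# Training data: depths where both wireline logs and core TOC exist.
cored = pd.read_csv("cored_intervals.csv").dropna(subset=LOG_CURVES + ["TOC_core"])
X_train, y_train = cored[LOG_CURVES], cored["TOC_core"]

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)

# Predict TOC over the full logged interval of an uncored well.
well = pd.read_csv("uncored_well.csv").dropna(subset=LOG_CURVES)
well["TOC_pred"] = model.predict(well[LOG_CURVES])
well[["DEPTH", "TOC_pred"]].to_csv("toc_prediction.csv", index=False)
```

The same pattern would extend to electro-facies prediction by swapping the regressor for a classifier trained on core-derived facies labels.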


Author(s):  
Pethuru Raj ◽  
Pushpa J.

Data is the new fuel for any system to deliver smart and sophisticated services. Data is being touted as the strategic asset that lets an organization plan ahead and provide next-generation capabilities with clarity and confidence. Whether data is sourced internally or aggregated from different and distributed sources, all kinds of data must be continuously and consciously collected, transmitted, cleansed, and hosted on storage systems. There are several types of analytical methods and machines for performing deeper and decisive analytics on that curated and consolidated data to extract actionable insights in real time. Precise and concise analytics guarantees sound decision-making and action. We need a competent and highly integrated analytics platform for speeding up, simplifying, and streamlining data analytics, which is becoming a hard nut to crack due to the multi-structured and massive quantities of data. On the infrastructure front, we need highly optimized compute, storage, and network infrastructure to achieve data analytics with ease. Another noteworthy point is that data processing can be batch, real-time, or interactive. Most personal and professional applications need real-time insights in order to deliver real-time services. That is, real-time capture, processing, and decision-making are increasingly demanded, and hence the edge or fog computing concept has become very popular. This chapter is designed to explain how to accomplish real-time analytics on data held on fog devices.
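To make the real-time aspect concrete, the sketch below shows one way a fog node might process sensor readings as they arrive, keeping a sliding-window average in memory and taking a local decision without a round trip to the cloud. The sensor source, window size and alert threshold are hypothetical placeholders, not part of the chapter itself.

```python
# Minimal sketch of edge-side real-time analytics on a fog device.
from collections import deque
import random
import time

WINDOW = 30          # number of most recent readings kept at the edge
THRESHOLD = 75.0     # illustrative alert level (e.g. temperature in degrees C)
window = deque(maxlen=WINDOW)

def read_sensor():
    """Stand-in for an actual device driver or MQTT subscription."""
    return 60.0 + random.gauss(0, 10)

while True:
    window.append(read_sensor())
    rolling_avg = sum(window) / len(window)

    # Real-time decision-making happens on the fog device itself.
    if rolling_avg > THRESHOLD:
        print(f"ALERT: rolling average {rolling_avg:.1f} exceeds {THRESHOLD}")

    time.sleep(1)  # sampling interval; nothing is sent upstream in this loop
```

Batch upload of the same readings to a central store could run independently of this loop, reflecting the usual split between edge-side real-time analytics and cloud-side batch analytics.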

