Application of Monte Carlo Analysis and Self-Organizing Maps to De-Risk Compressor Re-Wheeling

2021 ◽  
Author(s):  
Greg Michael Nelson ◽  
Robert Barrie

Abstract
Objectives / Scope: Re-wheeling compressors to match late-life field conditions offers significant benefits in operational efficiency and carbon reduction. However, changing the compressor wheels and increasing shaft speeds also introduces risk to the rotor-dynamic stability of the system. API assessments use deterministic methods to evaluate the design change, but give little information on the key risks and how to control them. This paper outlines new methods for assessing rotor-dynamic risks to compressors during re-wheeling, and their value over traditional methods.
Methods: New methods were developed to extend beyond the API requirements in order to assess and manage the rotor-dynamic risk as part of a peer review of re-wheeling a compressor train. A combination of sensitivity studies on key parameters and Self-Organizing Maps (SOMs, a machine learning technique) was used to identify the factors which present the greatest risk to the re-wheeling, and a Monte Carlo analysis was used to quantify the change in the risk of rotor-dynamic problems compared with the existing machine.
Results: The Monte Carlo analysis applied random distributions of factors to key input parameters, with the same factors applied to both the existing and re-wheeled designs. It identified that although the re-wheeled design was nominally more stable than the existing design according to the API analysis, it actually presented a greater risk of instability: under uncertainty in the input parameters, the distribution of resulting stability values had a higher mean but a greater spread than that of the existing machine. Since the existing machine is free from dynamics problems, the parameter combinations which rendered the existing machine unstable could be discounted; however, the remaining subset of factors, when applied to the re-wheeled design, still produced some unstable cases. The fact that the existing machine is free from dynamics problems therefore does not in itself rule out problems following the re-wheel. SOMs were used to identify the components posing the greatest risk to the re-wheeled design. This highlighted that low stiffness in two particular bearings along the high-speed shaft would pose the greatest risk to shaft stability, so operators and OEMs can pay close attention to these bearings to manage the risk as the re-wheel progresses.
Novel Information: This work shows that probabilistic and machine learning techniques have significant value in managing risks during compressor re-wheeling, highlighting risks which would not be identified by standard deterministic methods and focusing attention on the aspects most important to managing them.
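The Monte Carlo comparison described above can be sketched as follows. This is a deliberately simplified, hypothetical linear stability model: the base log decrements, sensitivity coefficients, and the ±30% uniform uncertainty ranges are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def log_decrement(stiffness_factor, cross_coupling_factor,
                  base_logdec, stiffness_sens, aero_sens):
    """Hypothetical linearised stability model: the log decrement falls as
    bearing stiffness drops and aerodynamic cross-coupling rises."""
    return (base_logdec
            + stiffness_sens * (stiffness_factor - 1.0)
            - aero_sens * (cross_coupling_factor - 1.0))

def monte_carlo_logdec(base_logdec, stiffness_sens, aero_sens,
                       n=10_000, seed=0):
    """Sample +/-30% uniform factors on both inputs and collect the
    resulting stability (log decrement) distribution."""
    rng = random.Random(seed)
    return [log_decrement(rng.uniform(0.7, 1.3), rng.uniform(0.7, 1.3),
                          base_logdec, stiffness_sens, aero_sens)
            for _ in range(n)]

def prob_unstable(samples):
    """Fraction of sampled cases with a negative log decrement."""
    return sum(s < 0.0 for s in samples) / len(samples)

# Existing machine: lower nominal log dec, weakly sensitive to the factors.
existing = monte_carlo_logdec(base_logdec=0.15, stiffness_sens=0.10, aero_sens=0.10)
# Re-wheeled machine: higher nominal log dec, but much more sensitive.
rewheeled = monte_carlo_logdec(base_logdec=0.25, stiffness_sens=0.40, aero_sens=0.60)

print(f"existing:   mean={statistics.mean(existing):.3f}  "
      f"P(unstable)={prob_unstable(existing):.2%}")
print(f"re-wheeled: mean={statistics.mean(rewheeled):.3f}  "
      f"P(unstable)={prob_unstable(rewheeled):.2%}")
```

With these illustrative numbers the re-wheeled design shows a higher mean log decrement yet a non-zero probability of instability, mirroring the paper's finding that a design which is nominally more stable can still carry greater risk under uncertainty.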

Energies ◽  
2020 ◽  
Vol 13 (18) ◽  
pp. 4862
Author(s):  
Nilesh Dixit ◽  
Paul McColgan ◽  
Kimberly Kusler

A good understanding of different rock types and their distribution is critical to locating oil and gas accumulations in the subsurface. Traditionally, rock core samples are used to directly determine the exact rock facies and the geological environments that might be present. Core samples are often expensive to recover and, therefore, not always available for each well. Wireline logs provide a cheaper alternative to core samples, but on their own they do not distinguish between rock facies. This problem can be overcome by integrating limited core data with widely available wireline log data using machine learning. Here, we present an application of machine learning to rock facies prediction based on limited core data from the Umiat Oil Field of Alaska. First, we identified five sandstone reservoir facies within the Lower Grandstand Member using core samples and mineralogical data available for the Umiat 18 well. Next, we applied machine learning algorithms (ascendant hierarchical clustering, self-organizing maps, artificial neural networks, and multi-resolution graph-based clustering) to the available wireline log data to build models trained on the core-derived information. We found that self-organizing maps provided the best results among these techniques for facies prediction. We used the best self-organizing map scheme to predict similar reservoir facies in nearby uncored wells (Umiat 23H and SeaBee-1) and validated our facies prediction results for these wells against observed seismic data.
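As a concrete illustration of the self-organizing-map step, the following is a minimal pure-Python sketch of the standard Kohonen training loop applied to two synthetic "facies" clusters. The log curves (gamma ray, porosity), cluster values, and map size are invented for illustration and are not from the Umiat data.

```python
import math
import random

def best_matching_unit(weights, x):
    """Neuron whose weight vector is closest (squared Euclidean) to x."""
    return min(weights, key=lambda rc: sum((wi - xi) ** 2
                                           for wi, xi in zip(weights[rc], x)))

def train_som(data, rows=3, cols=3, epochs=30, seed=1):
    """Train a small Self-Organizing Map on wireline-log feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Initialise each neuron's weight vector with a random training sample.
    weights = {(r, c): list(rng.choice(data))
               for r in range(rows) for c in range(cols)}
    sigma0, lr0 = max(rows, cols) / 2.0, 0.5
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.sample(data, len(data)):      # shuffled pass over the data
            frac = t / n_steps
            sigma = sigma0 * (1.0 - frac) + 0.5 * frac  # shrink neighbourhood
            lr = lr0 * (1.0 - frac) + 0.01 * frac       # decay learning rate
            bmu = best_matching_unit(weights, x)
            for (r, c), w in weights.items():
                d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                h = lr * math.exp(-d2 / (2.0 * sigma ** 2))
                for i in range(dim):
                    w[i] += h * (x[i] - w[i])       # pull neighbourhood toward x
            t += 1
    return weights

# Two synthetic "facies": clean sand (low gamma ray, high porosity) and
# shale (high gamma ray, low porosity); values are illustrative only.
rng = random.Random(0)
sand = [[rng.gauss(30, 5), rng.gauss(0.25, 0.02)] for _ in range(40)]
shale = [[rng.gauss(120, 10), rng.gauss(0.08, 0.02)] for _ in range(40)]
som = train_som(sand + shale)

# Cored intervals label the neurons; uncored samples inherit the label
# of their best matching unit.
sand_bmu = best_matching_unit(som, [28, 0.26])
shale_bmu = best_matching_unit(som, [125, 0.07])
print("sand maps to", sand_bmu, "; shale maps to", shale_bmu)
```

In a real workflow the feature vectors would be standardized log suites, and the neuron labels would come from the cored well before being propagated to the uncored wells.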


2021 ◽  
Author(s):  
Bohan Zheng

With the Internet of Things (IoT) being widely adopted in recent years, traditional machine learning and data mining methods applied alone can hardly cope with complex big data problems. However, hybridizing methods with complementary advantages can yield optimized, practical solutions. This work discusses how to solve multivariate regression problems and extract intrinsic knowledge by hybridizing Self-Organizing Maps (SOM) and regression trees. A dual-layer model is developed in which a SOM layer first performs unsupervised learning and a regression tree layer then performs supervised learning to produce predictions and extract knowledge. In this framework, SOM neurons serve as kernels onto which similar training samples are mapped, so that the regression trees can perform regression locally. In this way, the difficulties of applying and visualizing local regression on high-dimensional data are overcome. Further, we provide an automated growing mechanism based on a few stopping criteria without adding new parameters. A case study on the Electric Vehicle (EV) range anxiety problem demonstrates that the proposed hybrid model is quantitatively precise and interpretable. Keywords: Multivariate Regression, Big Data, Machine Learning, Data Mining, Self-Organizing Maps (SOM), Regression Tree, Electric Vehicle (EV), Range Estimation, Internet of Things (IoT)
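The two-layer idea can be sketched as follows. For brevity, a 1-D winner-take-all quantiser stands in for the SOM layer and a fitted line per unit stands in for the regression tree layer; the data are synthetic and the whole setup is an illustrative simplification, not the paper's model.

```python
import random

def fit_line(pts):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def train_hybrid(samples, n_units=4, epochs=50):
    """Layer 1: a 1-D winner-take-all quantiser (standing in for the SOM)
    partitions the input space into units. Layer 2: one local model per
    unit (a line here, where the paper uses regression trees)."""
    xs = [x for x, _ in samples]
    lo, hi = min(xs), max(xs)
    units = [lo + (i + 0.5) * (hi - lo) / n_units for i in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)
        for x, _ in samples:
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            units[bmu] += lr * (x - units[bmu])  # move the winner toward x
    # Fit one local model per unit from the samples it wins.
    buckets = {i: [] for i in range(n_units)}
    for x, y in samples:
        buckets[min(range(n_units), key=lambda i: abs(units[i] - x))].append((x, y))
    models = {i: fit_line(pts) for i, pts in buckets.items() if len(pts) >= 2}
    def predict(x):
        i = min(models, key=lambda j: abs(units[j] - x))
        a, b = models[i]
        return a * x + b
    return predict

# Synthetic piecewise-linear target: the local models capture each regime,
# which a single global line could not.
rng = random.Random(1)
data = [(x, (2 * x if x < 5 else 20 - 2 * x) + rng.gauss(0, 0.1))
        for x in (rng.uniform(0, 10) for _ in range(200))]
predict = train_hybrid(data)
print(f"predict(2) ~ {predict(2):.2f}, predict(8) ~ {predict(8):.2f}")
```

Each unit's local model is cheap to inspect, which is the interpretability benefit the abstract claims for fitting regression models per SOM neuron rather than globally.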


2015 ◽  
Vol 20 - 2015 - Special... ◽
Author(s):  
Christian Chan Shio ◽  
Francine Diener

We consider the problem of estimating the coefficients in a system of differential equations when a trajectory of the system is known at a small number of times. To do this, we use a simple Monte Carlo sampling method, known as the rejection sampling algorithm. Unlike deterministic methods, it does not provide a point estimate of the coefficients directly, but rather a collection of values that fit the known data well. An examination of the properties of the method allows us not only to better understand how to choose the different parameters when implementing the method, but also to introduce a more efficient two-step method which we call sequential rejection sampling. Several examples are presented to illustrate the performance of both the original and the new methods.
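The basic rejection sampling scheme described above can be sketched on a one-coefficient test problem. The ODE x'(t) = -a*x(t), the uniform prior on [0, 2], and the tolerance are illustrative choices, not the paper's examples; note that the output is a set of accepted coefficient values rather than a single point estimate.

```python
import math
import random

def simulate(a, x0, times, dt=0.01):
    """Euler integration of the test ODE x'(t) = -a * x(t), sampled at the
    observation times."""
    x, t, out = x0, 0.0, []
    for t_obs in times:
        while t < t_obs - 1e-9:
            x += dt * (-a * x)
            t += dt
        out.append(x)
    return out

def rejection_sample(times, observed, n_draws=10_000, eps=0.05, seed=0):
    """Draw a from the uniform prior on [0, 2]; keep draws whose simulated
    trajectory stays within eps of every observation."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        a = rng.uniform(0.0, 2.0)
        sim = simulate(a, observed[0], times)
        if max(abs(s - o) for s, o in zip(sim, observed)) < eps:
            accepted.append(a)
    return accepted

# Synthetic observations from a known coefficient a = 0.5.
times = [0.0, 0.5, 1.0, 1.5, 2.0]
observed = [math.exp(-0.5 * t) for t in times]
accepted = rejection_sample(times, observed)
estimate = sum(accepted) / len(accepted)
print(f"{len(accepted)} draws accepted, mean a ~ {estimate:.3f}")
```

The spread of the accepted values indicates how tightly the data constrain the coefficient; the sequential variant proposed in the paper narrows the proposal region in a second pass to raise the acceptance rate.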


2007 ◽  
Vol 4 (6) ◽  
pp. 3953-3978 ◽  
Author(s):  
M. Herbst ◽  
M. C. Casper

Abstract. The reduction of information that occurs when model time series are condensed into aggregating statistical measures is very high compared to the amount of information one would like to draw from them for model identification and calibration purposes. Applied within a model identification context, aggregating statistical performance measures are inadequate to capture details of time series characteristics. It has been shown that this loss of information on the residuals imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty. In this contribution we present an approach using a Self-Organizing Map (SOM) to circumvent the identifiability problem induced by the low discriminatory power of aggregating performance measures. The Self-Organizing Map is used to differentiate the spectrum of model realizations, obtained from Monte-Carlo simulations with a distributed conceptual watershed model, based on the recognition of different patterns in the time series. Further, the SOM is used in place of a classical optimization algorithm to identify the model realizations among the Monte-Carlo simulations that most closely approximate the pattern of the measured discharge time series. The results are analyzed and compared with the manually calibrated model as well as with the results of the Shuffled Complex Evolution algorithm (SCE-UA).


Author(s):  
A. Ndiaye ◽  
M. Bauerheim ◽  
S. Moreau ◽  
F. Nicoud

Combustion instabilities can develop in modern gas turbines as large-amplitude pressure oscillations coupled with heat release fluctuations. In extreme cases, they lead to irreversible damage which can destroy the combustor. Prediction and control of all acoustic modes of the configuration at the design stage are therefore required to avoid these instabilities. This is a challenging task because of the large number of parameters involved, and the situation becomes even more complex when considering uncertainties in the underlying models and input parameters. The forward uncertainty quantification problem is addressed in the case of a single swirled-burner combustor. First, a Helmholtz solver is used to analyze the thermoacoustic modes of the combustion chamber, with the experimentally measured Flame Transfer Function (FTF) serving as the flame model. Then, the frequency of oscillation and the growth rate of the first thermoacoustic mode are computed at 24 different operating points. Comparisons between experimental and numerical results show good agreement except for modes which are marginally stable/unstable. The main reason is that the uncertainties can arbitrarily change the nature of these modes (stable vs. unstable); in other words, the usual stable/unstable mode classification must be replaced by a more continuous description such as the risk factor, i.e. the probability of a mode being unstable given the uncertainties in the input parameters. To do so, a Monte Carlo analysis is performed using 4000 Helmholtz simulations of a single experimental operating point with random perturbations on the FTF parameters. This allows the risk factor associated with this acoustic mode to be computed. Finally, the analysis of the Monte Carlo database suggests that a reduced two-step UQ strategy may be efficient for thermoacoustics in such a system.
First, two bilinear surrogate models are tuned from a moderate number of Helmholtz solutions (a few tens). Then, these algebraic models are used to perform a Monte Carlo analysis at reduced cost and approximate the risk factor of the mode. The accuracy and efficiency of this reduced UQ strategy are assessed by comparing the reference risk factor given by the full Monte Carlo database with the approximate risk factor obtained from the surrogate models. The good agreement between the two shows that efficient reduced methods can be used to predict unstable modes.
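The two-step strategy can be sketched as follows. Here a cheap closed-form function stands in for the Helmholtz solver, the surrogate is tuned from four corner evaluations rather than a few tens of solutions, and the growth-rate model and uncertainty ranges are invented for illustration.

```python
import math
import random

def full_model(gain, delay):
    """Stand-in for an expensive Helmholtz solve: modal growth rate as a
    hypothetical smooth function of FTF gain and time delay."""
    return gain * math.cos(2.0 * delay) - 0.8

def bilinear_surrogate(f, g0, g1, t0, t1):
    """Tune a bilinear surrogate from four 'solver' evaluations at the
    corners of the uncertain (gain, delay) box."""
    f00, f01, f10, f11 = f(g0, t0), f(g0, t1), f(g1, t0), f(g1, t1)
    def s(g, t):
        u, v = (g - g0) / (g1 - g0), (t - t0) / (t1 - t0)
        return (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
                + f01 * (1 - u) * v + f11 * u * v)
    return s

def risk_factor(model, n=4000, seed=0):
    """Risk factor: probability that the mode is unstable (growth rate > 0)
    under uniform uncertainty on the FTF gain and delay."""
    rng = random.Random(seed)
    unstable = sum(model(rng.uniform(0.8, 1.2), rng.uniform(0.4, 0.6)) > 0.0
                   for _ in range(n))
    return unstable / n

surrogate = bilinear_surrogate(full_model, 0.8, 1.2, 0.4, 0.6)
rf_full = risk_factor(full_model)   # 4000 "expensive" evaluations
rf_cheap = risk_factor(surrogate)   # 4000 cheap algebraic evaluations
print(f"risk factor: full model {rf_full:.3%}, surrogate {rf_cheap:.3%}")
```

The point of the reduced strategy is that the Monte Carlo loop runs over the algebraic surrogate, so the expensive solver is called only a handful of times to tune it.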


2019 ◽  
Vol 7 (3) ◽  
pp. SG23-SG42 ◽  
Author(s):  
Thang N. Ha ◽  
Kurt J. Marfurt ◽  
Bradley C. Wallet ◽  
Bryce Hutchinson

Recent developments in attribute analysis and machine learning have significantly enhanced interpretation workflows for 3D seismic surveys. Nevertheless, even in 2018, many sedimentary basins are only covered by grids of 2D seismic lines. These 2D surveys are suitable for regional feature mapping and often identify targets in areas not covered by 3D surveys. With continuing pressure to cut costs in the hydrocarbon industry, it is crucial to extract as much information as possible from these 2D surveys. Unfortunately, many, if not most, modern interpretation software packages are designed to work exclusively with 3D data. To determine whether we can apply 3D volumetric interpretation workflows to grids of 2D seismic lines, we have applied data conditioning, attribute analysis, and a machine-learning technique called self-organizing maps to 2D data acquired over the Exmouth Plateau, North Carnarvon Basin, Australia. We find that these workflows allow us to significantly improve image quality, interpret regional geologic features, identify local anomalies, and perform seismic facies analysis. However, these workflows are not without pitfalls. We need to be careful in choosing the order of filters in the data conditioning workflow and be aware of reflector misties at line intersections. Vector data, such as reflector convergence, need to be extracted and then mapped component-by-component before combining the results. We are also unable to perform attribute extraction along a surface or geobody extraction for 2D data in our commercial interpretation software package. To address this issue, we devise a point-by-point attribute extraction workaround to overcome the incompatibility between the 3D interpretation workflow and 2D data.
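A point-by-point attribute extraction of the kind mentioned above can be sketched in a few lines: for each trace of a 2D line, look up the attribute sample nearest to the picked horizon time. The grid layout, sample interval, and attribute values below are invented for illustration and do not reflect the authors' software workaround in detail.

```python
def extract_along_horizon(attribute, dt, horizon_times):
    """Point-by-point attribute extraction along a 2-D line: for each trace,
    return the attribute sample nearest the picked horizon time.
    attribute[trace][sample] lies on a regular time grid with spacing dt."""
    values = []
    for trace, t in enumerate(horizon_times):
        k = min(round(t / dt), len(attribute[trace]) - 1)  # nearest sample index
        values.append(attribute[trace][k])
    return values

# Tiny synthetic example: 3 traces x 10 samples at a 4 ms interval, with
# attribute value trace*100 + sample so the lookup is easy to verify.
attribute = [[trace * 100 + k for k in range(10)] for trace in range(3)]
horizon_times = [0.008, 0.012, 0.016]  # picked horizon times, in seconds
print(extract_along_horizon(attribute, 0.004, horizon_times))
```

In practice one would interpolate between samples rather than take the nearest one, and repeat the extraction per line before merging results at line intersections.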


2008 ◽  
Vol 12 (2) ◽  
pp. 657-667 ◽  
Author(s):  
M. Herbst ◽  
M. C. Casper

Abstract. The reduction of information that occurs when model time series are condensed into aggregating statistical performance measures is very high compared to the amount of information one would like to draw from them for model identification and calibration purposes. It has been shown that this loss imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty. In this contribution we present an approach using a Self-Organizing Map (SOM) to circumvent the identifiability problem induced by the low discriminatory power of aggregating performance measures. The Self-Organizing Map is used to differentiate the spectrum of model realizations, obtained from Monte-Carlo simulations with a distributed conceptual watershed model, based on the recognition of different patterns in the time series. Further, the SOM is used in place of a classical optimization algorithm to identify those model realizations among the Monte-Carlo simulation results that most closely approximate the pattern of the measured discharge time series. The results are analyzed and compared with the manually calibrated model as well as with the results of the Shuffled Complex Evolution algorithm (SCE-UA). In our study the latter slightly outperformed the SOM results. The SOM method, however, yields a set of equivalent model parameterizations and therefore also allows the parameter space to be confined to a region that closely represents the measured data set. This particular feature renders the SOM potentially useful for future model identification applications.
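The selection step, identifying the Monte-Carlo realizations whose discharge pattern best matches the measurement, can be sketched as follows. For brevity, hand-crafted pattern descriptors and a nearest-neighbour ranking stand in for the trained SOM, and the hydrograph model and its two parameters are synthetic.

```python
import random

def simulate_discharge(recession, peak_scale, n=50):
    """Toy hydrograph: linear rise over 10 steps, then exponential recession."""
    rise = [peak_scale * i / 10.0 for i in range(10)]
    fall = [peak_scale * (1.0 - recession) ** i for i in range(n - 10)]
    return rise + fall

def pattern_features(series):
    """Crude pattern descriptors of a discharge series (peak value and mean
    flow), standing in for the patterns a trained SOM would recognise."""
    return [max(series), sum(series) / len(series)]

def distance(f, g):
    return sum((a - b) ** 2 for a, b in zip(f, g))

# Monte-Carlo realizations from random draws of the two model parameters.
rng = random.Random(0)
params = [(rng.uniform(0.05, 0.5), rng.uniform(1.0, 10.0)) for _ in range(500)]
runs = [simulate_discharge(r, p) for r, p in params]

# "Measured" series generated from known parameters (0.2, 5.0).
target = pattern_features(simulate_discharge(0.2, 5.0))

# Keep the 10 realizations whose pattern most closely matches the measurement;
# their parameters form the behavioural subset of the parameter space.
ranked = sorted(range(len(runs)),
                key=lambda i: distance(pattern_features(runs[i]), target))
best = [params[i] for i in ranked[:10]]
print("closest parameter sets:", [(round(r, 2), round(p, 2)) for r, p in best[:3]])
```

The retained parameter sets are not a single optimum but a region of near-equivalent parameterizations, which is the confinement property the abstract highlights as the SOM's advantage over a classical optimizer.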


2020 ◽  
Vol 79 (23-24) ◽  
pp. 16299-16317 ◽  
Author(s):  
Rashmika Nawaratne ◽  
Achini Adikari ◽  
Damminda Alahakoon ◽  
Daswin De Silva ◽  
Naveen Chilamkurti


