Dynamics of droplet migration in oscillatory and pulsating microchannel flows and prediction and uncertainty quantification of its lateral equilibrium position using multifidelity Gaussian processes

2021 ◽  
Vol 33 (6) ◽  
pp. 062010
Author(s):  
Ali Lafzi ◽  
Sadegh Dabiri
Author(s):  
Clay S. Norrbin ◽  
Dara W. Childs

The long length of subsea Electric Submersible Pumps (ESPs) requires a large number of annular seals. Loading caused by gravity and housing curvature changes the Static Equilibrium Position (SEP) of the rotor in these seals. This analysis predicts the SEP due to gravity and/or well-curvature loading and also displays the rotordynamics around the SEP. A static and rotordynamic analysis is presented for a previously studied ESP model. This study differs by first finding the SEP and then performing a rotordynamic analysis about the SEP. Predictions are shown in a horizontal and a vertical orientation. In these two configurations, viscosities and clearances are varied through four cases: 1X 1cP, 3X 1cP, 1X 30cP, and 3X 30cP. In a horizontal, straight-housing position, the model includes gravity and buoyancy on the shaft. At 1cP-1X and 1cP-3X, the horizontal statics show a moderate eccentricity ratio for the shaft with respect to the housing. With 30cP-1X, the predicted static eccentricity ratio is low at 0.08. With 30cP-3X, the predicted eccentricity ratio increases to 0.33. Predictions for a vertical case of the same model are also presented. The curvature of the housing is varied in the Y–Z plane until a rub or close-to-wall rub is expected. The curvature needed for a rub with a 1X 1cP fluid is 7.5 degrees. Curvature has little impact on stability. With both 1X 30cP and 3X 30cP, the maximum curvature before a static rub is over 25 degrees. Both 1X 30cP and 3X 30cP remain unstable with increasing curvature.
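The eccentricity ratios quoted above (0.08 for 30cP-1X, 0.33 for 30cP-3X) follow the standard definition: the radial offset of the shaft center from the housing center, normalized by the radial clearance, so that 0 means a centered shaft and 1 means wall contact (rub). A minimal sketch of that definition, with an illustrative function name and coordinate convention not taken from the paper:

```python
import math

def eccentricity_ratio(x, y, clearance):
    """Static eccentricity ratio of a shaft inside a housing.

    x, y      -- displacement of the shaft center from the housing center
    clearance -- radial clearance between shaft and housing (same units)

    Returns e/c: 0.0 for a perfectly centered shaft, 1.0 at wall contact.
    """
    return math.hypot(x, y) / clearance

# Example: a shaft displaced 0.033 units in a 0.1-unit clearance
# gives an eccentricity ratio of 0.33, as in the 30cP-3X case.
ratio = eccentricity_ratio(0.0, 0.033, 0.1)
```

The displacement and clearance values here are illustrative, chosen only to reproduce one of the quoted ratios.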


2018 ◽  
Vol 140 (6) ◽  
Author(s):  
Clay S. Norrbin ◽  
Dara W. Childs



2021 ◽  
Author(s):  
Pramudita S. Palar ◽  
Kemas Zakaria ◽  
Lavi Rizki Zuhal ◽  
Koji Shimoyama ◽  
Rhea P. Liem

2020 ◽  
Author(s):  
Maria Moreno de Castro

<p>The presence of automated decision making continuously increases in today's society. Algorithms based on machine and deep learning decide how much we pay for insurance, translate our thoughts to speech, and shape our consumption of goods (via e-marketing) and knowledge (via search engines). Machine and deep learning models are ubiquitous in science too; in particular, many promising examples are being developed to prove their feasibility for earth-science applications, such as finding temporal trends or spatial patterns in data or improving parameterization schemes for climate simulations.</p><p>However, most machine and deep learning applications aim to optimise performance metrics (for instance, accuracy, which counts how often the model's prediction was right), which are rarely good indicators of trust (i.e., of why those predictions were right). In fact, with the increase of data volume and model complexity, machine and deep learning predictions can be very accurate but also prone to rely on spurious correlations, encode and magnify bias, and draw conclusions that do not incorporate the underlying dynamics governing the system. Because of that, the uncertainty of the predictions and our confidence in the model are difficult to estimate, and the relation between inputs and outputs becomes hard to interpret.</p><p>Since it is challenging to shift a community from “black” to “glass” boxes, it is more useful to implement Explainable Artificial Intelligence (XAI) techniques right at the beginning of machine and deep learning adoption rather than to try to fix fundamental problems later. The good news is that most popular XAI techniques are essentially sensitivity analyses: they consist of a systematic perturbation of some model components in order to observe how the model predictions are affected. The techniques comprise random sampling, Monte-Carlo simulations, and ensemble runs, which are common methods in geosciences. Moreover, many XAI techniques are reusable because they are model-agnostic and are applied after the model has been fitted. In addition, interpretability provides robust arguments when communicating machine and deep learning predictions to scientists and decision-makers.</p><p>In order to assist not only practitioners but also end-users in the evaluation of machine and deep learning results, we will explain the intuition behind some popular techniques of XAI and of aleatory and epistemic Uncertainty Quantification: (1) Permutation Importance and Gaussian processes on the inputs (i.e., perturbation of the model inputs); (2) Monte-Carlo Dropout, Deep Ensembles, Quantile Regression, and Gaussian processes on the weights (i.e., perturbation of the model architecture); (3) Conformal Predictors (useful to estimate the confidence interval on the outputs); and (4) Layerwise Relevance Propagation (LRP), Shapley values, and Local Interpretable Model-Agnostic Explanations (LIME), designed to visualize how each feature in the data affected a particular prediction. We will also introduce some best practices, such as detecting anomalies in the training data before training, implementing fallbacks when a prediction is not reliable, and physics-guided learning that includes constraints in the loss function to avoid physical inconsistencies, such as the violation of conservation laws.</p>
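To make the "perturbation of the model inputs" idea concrete, Permutation Importance can be sketched in a few lines: shuffle one feature column at a time and measure how much the model's score degrades. The larger the drop, the more the model relied on that feature. This is a generic, model-agnostic sketch (the function and argument names are illustrative, not from any specific library):

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic permutation importance.

    model     -- any fitted object with a .predict(X) method
    metric    -- score function metric(y_true, y_pred); higher is better
    Returns one importance per feature: the mean drop in score when
    that feature's column is shuffled (its link to y destroyed).
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-target link for column j
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```

Because it only needs `predict`, the same function works unchanged for a linear model, a random forest, or a neural network, which is exactly the model-agnostic, post-fit reusability mentioned above.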

