A MACHINE-LEARNT WALL FUNCTION FOR ROTATING DIFFUSERS

2021 ◽  
pp. 1-37
Author(s):  
Lorenzo Tieghi ◽  
Alessandro Corsini ◽  
Giovanni Delibra ◽  
Francesco Aldo Tucci

Abstract Data-driven techniques have proved their effectiveness in many engineering applications, and machine learning has gradually become a paradigm for exploring innovative designs in turbomachinery. However, industrial Computational Fluid Dynamics (CFD) experts are still reluctant to embed similar approaches in standard practice, and very few solutions have been proposed so far. The aim of this work is to show that standard wall treatments can benefit substantially from machine-learning modelling. Turbomachinery flow modelling entails a constant compromise between accuracy and the computational cost of numerical simulations, and one of the key factors in this process is the choice of a proper wall treatment. Many works point out how insufficient resolution of the boundary layer may lead to incorrect predictions of turbomachinery performance. Wall functions are universally exploited to replicate the physics of the boundary layer where grid resolution does not suffice. Widespread wall functions were derived from the observation of a few canonical flows and are expressed as simple polynomials of the Reynolds number and turbulent kinetic energy. Despite their popularity, these functions are frequently applied in flows where the underlying assumptions cease to hold, such as rotating passages or swirled flows. In these flows, the mathematical formulations of wall functions do not account for the distortion of the boundary layer caused by the combined action of centrifugal and Coriolis forces. Here we derive a wall function for rotating passages by means of machine learning. The algorithm is implemented directly in the Navier-Stokes solver. Cross-validation results show that the machine-learnt wall treatment effectively corrects the turbulent kinetic energy field near the solid walls, without impairing the accuracy of the underlying k-epsilon RANS model.
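
As a rough illustration of the idea, a learnt wall treatment can be viewed as a data-driven correction applied on top of the equilibrium near-wall value of turbulent kinetic energy used by standard k-epsilon wall functions. The sketch below is not the authors' model: the regressor is reduced to a single sigmoid unit with made-up weights, and the input features (a wall coordinate y+ and a rotation number) are assumptions for illustration only.

```python
import math

C_MU = 0.09  # standard k-epsilon model constant

def k_wall_standard(u_tau):
    """Equilibrium near-wall turbulent kinetic energy: k = u_tau^2 / sqrt(C_mu)."""
    return u_tau ** 2 / math.sqrt(C_MU)

def ml_correction(y_plus, rotation_number, w=(0.05, -0.3), b=0.0):
    """Hypothetical learnt correction factor in (0, 2): a single sigmoid unit
    standing in for the trained regressor; weights are illustrative, not
    values from the paper."""
    z = w[0] * math.log(max(y_plus, 1.0)) + w[1] * rotation_number + b
    return 2.0 / (1.0 + math.exp(-z))

def k_wall_ml(u_tau, y_plus, rotation_number):
    """Machine-learnt wall value: equilibrium estimate times learnt correction."""
    return k_wall_standard(u_tau) * ml_correction(y_plus, rotation_number)
```

With zero rotation and y+ = 1 the correction factor is exactly 1, recovering the standard equilibrium value, which is the behaviour one would want from such a correction in non-rotating flow.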


Author(s):  
Lynne O’Hare ◽  
Thomas J. Scanlon ◽  
Jason M. Reese

In this paper, the phenomenological “wall function” method of Lockerby, Reese and Gallis [1] is used to capture important non-equilibrium aspects of gas microflows. The approach is a constitutive scaling method that captures slip at solid walls and the Knudsen layer, a near-wall region where non-linear constitutive behaviour is observed. Here the wall-function approach is applied to model gas flow through microscale orifice plates, an industrially relevant application. Two key test geometries are investigated: flow through a constriction in a microchannel, and flow through a microscale venturi in a microchannel. A range of incompressible, isothermal flows is analysed with the OpenFOAM computational fluid dynamics (CFD) code [10], and the numerical results are validated against available experimental data [12]. Following successful validation of the model, the increasing impact of non-equilibrium effects on flow through the test geometries at higher Knudsen numbers is demonstrated. The integration of the constitutive scaling approach with mainstream CFD is shown to be a flexible technique, well suited to engineering design applications.
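
Conceptually, this kind of constitutive scaling damps the effective viscosity within roughly one mean free path of the wall, so that the stress/strain-rate coupling weakens where non-equilibrium effects dominate. The sketch below uses a purely illustrative exponential blending function with a made-up shape parameter; the actual curve fit of Lockerby, Reese and Gallis differs and should be taken from [1].

```python
import math

def knudsen_layer_scaling(y_over_lambda, a=0.7):
    """Illustrative near-wall scaling function Psi(y/lambda): tends to 1 far
    from the wall, drops below 1 inside the Knudsen layer (roughly y < lambda).
    'a' is an invented shape parameter for this sketch, not a fitted value."""
    return 1.0 - a * math.exp(-y_over_lambda)

def effective_viscosity(mu, y, mean_free_path):
    """Constitutive scaling: reduce the effective viscosity near the wall to
    mimic the enhanced velocity gradients of the Knudsen layer."""
    return mu * knudsen_layer_scaling(y / mean_free_path)
```

Far from the wall (y much larger than the mean free path) the scaling returns the bulk Navier-Stokes viscosity unchanged, so the modification is confined to the near-wall cells where it is needed.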


Author(s):  
Marco Colombo ◽  
Antonio Cammi ◽  
Marco E. Ricotti

This paper presents a comprehensive study of fully developed single-phase turbulent flow and pressure drops in helically coiled channels. To this aim, pressure drops were measured in an experimental campaign conducted at the SIET labs in Piacenza, Italy, in a test facility simulating the Steam Generator (SG) of a Generation III+ integral reactor. Very good agreement is found between the data and some of the most common correlations available in the literature; further data from the literature are also considered for comparison. The experimental results are then used to assess Computational Fluid Dynamics (CFD) simulations. Using the commercial CFD package FLUENT, different turbulence models are tested, in particular the Standard, RNG and Realizable k-ε models, the Shear Stress Transport (SST) k-ω model and the second-order Reynolds Stress Model (RSM). Moreover, particular attention is paid to the different types of wall function used throughout the simulations, since they appear to have a great influence on the calculated results. The results are intended as a contribution to assessing the capability of turbulence models to simulate fully developed turbulent flow and pressure drops in helical geometries.
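
For context, the curvature of a helical coil raises the friction factor above the straight-pipe value, which is what the correlations compared here capture. The sketch below contrasts the Blasius straight-pipe correlation with an Ito-type coil correlation as it is commonly quoted in the heat-transfer literature; the coil coefficients and their validity range should be verified against the original sources before any real use.

```python
def friction_factor_straight(re):
    """Blasius correlation (Darcy friction factor) for turbulent flow in a
    smooth straight pipe, roughly 4e3 < Re < 1e5."""
    return 0.3164 * re ** -0.25

def friction_factor_coil(re, d_tube, d_coil):
    """Ito-type correlation for turbulent flow in a helical coil, as commonly
    quoted (coefficients are reproduced from memory of the literature and
    should be checked): f = 0.304 Re^-0.25 + 0.029 sqrt(d/D),
    with d the tube diameter and D the coil diameter."""
    return 0.304 * re ** -0.25 + 0.029 * (d_tube / d_coil) ** 0.5

# Curvature raises the friction factor relative to a straight pipe:
re = 5.0e4
f_straight = friction_factor_straight(re)
f_coil = friction_factor_coil(re, d_tube=0.0127, d_coil=1.0)
```

Even for a gentle curvature ratio d/D of about 0.013, the coil friction factor exceeds the straight-pipe value at the same Reynolds number, consistent with the curvature-induced secondary (Dean) flows discussed in the coiled-channel literature.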


Solar Energy ◽  
2021 ◽  
Vol 228 ◽  
pp. 45-52
Author(s):  
Chengwan Zhu ◽  
Wu Liu ◽  
Yaoyao Li ◽  
Xiaomin Huo ◽  
Haotian Li ◽  
...  

2021 ◽  
Author(s):  
Christopher R Wagner ◽  
Timothy Phillips ◽  
Serge Roux ◽  
Joseph P Corrigan

Abstract In this paper, we highlight promising technologies in each phase of a robotic neurosurgery operation and identify key factors affecting how quickly these technologies will mature into products in the operating room. We focus on specific technology trends in image-guided cranial and spinal procedures, including advances in imaging, machine learning, robotics, and novel interfaces. For each technology, we discuss the effort required to overcome safety or implementation challenges, and identify example regulatory-approved products in related fields for comparison. The goal is to provide clinicians with a roadmap of which robotic and automation technologies are in the developmental pipeline, and which are likely to impact their practice sooner rather than later.


RSC Advances ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. 4014-4022
Author(s):  
Young Woo Kim ◽  
Hee-Jin Yu ◽  
Jung-Sun Kim ◽  
Jinyong Ha ◽  
Jongeun Choi ◽  
...  

A two-step machine learning (ML) algorithm for coronary artery decision making is introduced, which improves data quality by supplying flow characteristics and biometric features derived with the aid of computational fluid dynamics (CFD).
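
A two-step pipeline of this kind can be pictured as follows: a first stage maps geometric inputs to a CFD-derived flow feature, and a second stage combines that feature with biometric data to classify the lesion. The sketch below is entirely hypothetical; the feature names, weights, and threshold are invented for illustration and are not the model from the paper.

```python
import math

def predict_flow_feature(stenosis_ratio, w=3.0, b=-0.5):
    """Step 1 (hypothetical surrogate for CFD): map a geometric feature,
    here a stenosis severity ratio, to a pressure-drop-like flow feature."""
    return w * stenosis_ratio + b

def classify_lesion(flow_feature, age_norm, weights=(2.0, 0.5), bias=-1.5):
    """Step 2: logistic classifier combining the CFD-derived flow feature
    with a normalised biometric feature; returns the probability that the
    lesion is haemodynamically significant. Weights are illustrative."""
    z = weights[0] * flow_feature + weights[1] * age_norm + bias
    return 1.0 / (1.0 + math.exp(-z))

def two_step_decision(stenosis_ratio, age_norm, threshold=0.5):
    """Chain the two stages and apply a decision threshold."""
    return classify_lesion(predict_flow_feature(stenosis_ratio), age_norm) >= threshold
```

The design choice this illustrates is that the CFD stage enriches the raw geometric data with physics-informed features before the classifier ever sees it, which is the stated route by which the method increases data quality.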


2019 ◽  
Vol 141 (9) ◽  
Author(s):  
Daniel M. Probst ◽  
Mandhapati Raju ◽  
Peter K. Senecal ◽  
Janardhan Kodavasal ◽  
Pinaki Pal ◽  
...  

This work evaluates different optimization algorithms for computational fluid dynamics (CFD) simulations of engine combustion. Due to the computational expense of CFD simulations, emulators built with machine learning algorithms were used as surrogates for the optimizers. Two types of emulators were used: a Gaussian process (GP) and a weighted combination of machine learning methods called SuperLearner (SL). The emulators were trained on a dataset of 2048 CFD simulations that were run concurrently on a supercomputer. The design of experiments (DOE) for the CFD runs was obtained by perturbing nine input parameters using a Monte-Carlo method. The CFD simulations were of a heavy-duty engine running with a low-octane gasoline-like fuel in a partially premixed compression ignition mode. Ten optimization algorithms were tested, including types typically used in research applications. Each optimizer was allowed 800 function evaluations and was tested in 100 independent trials. The optimizers were evaluated on the median, minimum, and maximum merit obtained over the 100 attempts. Some optimizers required more sequential evaluations, thereby resulting in longer wall-clock times to reach an optimum. The best-performing optimization methods were particle swarm optimization (PSO), differential evolution (DE), GENOUD (an evolutionary algorithm), and the micro-genetic algorithm (GA). These methods found a high median optimum as well as a reasonable minimum optimum over the 100 trials. Moreover, all of these methods were able to operate with fewer than 100 successive iterations, which reduced the wall-clock time required in practice. Two methods were found to be effective but required a much larger number of successive iterations: the DIRECT and MALSCHAINS algorithms. A random search method that completed in a single iteration performed poorly in finding optimum designs but was included to illustrate the limitation of highly concurrent search methods. The last three methods, Nelder–Mead, bound optimization by quadratic approximation (BOBYQA), and constrained optimization by linear approximation (COBYLA), did not perform as well.
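
The emulator-in-the-loop setup above can be sketched in miniature: the expensive CFD merit function is replaced by a cheap surrogate, and a swarm optimiser queries only the surrogate. The code below is a deliberately simplified 1-D particle swarm with a toy quadratic standing in for the trained GP/SuperLearner emulator; the coefficients (inertia 0.7, attraction 1.5) are conventional illustrative choices, not values from the study.

```python
import random

def emulator(x):
    """Toy stand-in for a trained emulator of the CFD merit function:
    a smooth function whose minimum sits at x = 0.3."""
    return (x - 0.3) ** 2

def pso_minimize(f, lo, hi, n_particles=20, n_iter=50, seed=0):
    """Minimal 1-D particle swarm optimiser of the general kind the study
    ranks among the best performers when run against an emulator."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                    # best position seen by each particle
    gbest = min(xs, key=f)           # best position seen by the whole swarm
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # velocity update: inertia + pull toward personal and global bests
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to the search bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(emulator, -1.0, 1.0)
```

Because every function evaluation here hits the cheap emulator rather than a CFD run, the optimizer's hundreds of evaluations cost essentially nothing, which is the core rationale of the surrogate-based workflow the abstract describes.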

