Assessment of Discretization Uncertainty Estimators Based on Grid Refinement Studies

Author(s):  
Luís Eça ◽
Guilherme Vaz ◽  
Martin Hoekstra ◽  
Scott Doebling ◽  
Robert Singleton ◽  
...  

Abstract: This paper presents an assessment of the performance of nine discretization uncertainty estimators based on grid refinement studies, including methods that use grid triplets and others that use a larger number of data points, which in the present study was set to five. The uncertainty estimates are performed for the data set proposed for the 2017 ASME Workshop on Estimation of Discretization Errors, including functional and local flow quantities from the two-dimensional incompressible flows over a flat plate and the NACA 0012 airfoil. The data were generated with a RANS solver using three eddy-viscosity turbulence models, with double precision and sufficiently tight iterative convergence criteria to ensure that the numerical error is dominated by the discretization error. The use of several geometrically similar grid sets with different near-wall cell sizes leads to a wide range of convergence properties for the selected flow quantities. The evaluation of the uncertainty estimators is based on the ratio of the estimated uncertainty to the "exact error", which is obtained from an "exact solution" derived from extra grid sets significantly more refined than those used to generate the Workshop data. Although none of the methods tested fulfilled the goal of bounding the "exact error" in 95 cases out of 100, the results suggest that the methods tested are useful tools for the assessment of the numerical uncertainty of practical numerical simulations, even for cases where it is not possible to generate data in the "asymptotic range".
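
As an illustration of the family of estimators assessed here, the sketch below implements the classical grid-triplet approach (Richardson extrapolation with a safety factor, in the spirit of Roache's GCI). The nine estimators evaluated in the paper differ in their handling of the observed order, fits over more than three grids, and safety factors; the numerical values below are purely illustrative.

```python
import math

def gci_uncertainty(phi1, phi2, phi3, r, Fs=1.25):
    """Grid-triplet discretization uncertainty via Richardson extrapolation.

    phi1, phi2, phi3: solutions on the fine, medium, and coarse grids.
    r: constant grid refinement ratio (h2/h1 = h3/h2 = r > 1).
    Fs: safety factor (1.25 is a common choice for three-grid studies).
    Assumes monotonic convergence; the estimators assessed in the paper
    also handle non-monotonic and anomalous behavior.
    """
    # Observed order of convergence from the two solution differences.
    p = math.log(abs(phi3 - phi2) / abs(phi2 - phi1)) / math.log(r)
    # Richardson-extrapolated estimate of the grid-independent solution.
    phi_ext = phi1 + (phi1 - phi2) / (r**p - 1.0)
    # Estimated discretization error and expanded uncertainty on the fine grid.
    error = abs(phi1 - phi_ext)
    return Fs * error, p, phi_ext

# Illustrative values for a drag coefficient on three similar grids, r = 2:
U, p, cd_ext = gci_uncertainty(0.02817, 0.02840, 0.02901, r=2.0)
```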

Author(s):  
L. Eça ◽  
G. Vaz ◽  
M. Hoekstra

This paper presents grid refinement studies for statistically steady, two-dimensional (2D) flows of an incompressible fluid: a flat plate at Reynolds numbers of 10⁷, 10⁸, and 10⁹ and the NACA 0012 airfoil at angles of attack of 0, 4, and 10 deg with Re = 6 × 10⁶. Results are based on the numerical solution of the Reynolds-averaged Navier–Stokes (RANS) equations supplemented by one of three eddy-viscosity turbulence models: the one-equation model of Spalart and Allmaras and the two-equation k–ω SST and k–√kL models. Grid refinement studies are performed in sets of geometrically similar structured grids, permitting an unambiguous definition of the typical cell size, using double precision and an iterative convergence criterion that guarantees a numerical error dominated by the discretization error. For each case, different grid sets with the same number of cells but different near-wall spacings are used to generate a data set that allows more than one estimation of the numerical uncertainty for similar grid densities. The selected flow quantities include functional (integral), surface, and local flow quantities, namely, drag/resistance and lift coefficients; skin friction and pressure coefficients at the wall; and mean velocity components and eddy viscosity at specified locations in the boundary-layer region. An extra set of grids significantly more refined than those proposed for the estimation of the numerical uncertainty is generated for each test case. Using power-law extrapolations, these extra solutions are used to obtain an approximation of the exact solution, which allows the assessment of the performance of the numerical uncertainty estimations performed for the base data set. However, it must be stated that, with grids of up to 2.5 (plate) and 8.46 (airfoil) million cells in two dimensions, the asymptotic range is not attained for many of the selected flow quantities. All of these data are available online to the community.
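
The power-law extrapolation used to approximate the exact solution can be sketched as a least-squares fit of a single-term error expansion, φ(h) ≈ φ₀ + α·hᵖ, where φ₀ is the h → 0 limit. The code below is a minimal illustration with made-up values, not the paper's actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Typical cell sizes h_i (relative) and corresponding solutions phi_i
# from a highly refined grid set (illustrative values only).
h = np.array([1.0, 0.8, 0.6, 0.5, 0.4])
phi = np.array([0.02901, 0.02872, 0.02848, 0.02838, 0.02830])

def power_law(h, phi0, alpha, p):
    # Single-term error expansion: phi(h) = phi0 + alpha * h**p.
    return phi0 + alpha * h**p

(phi0, alpha, p), _ = curve_fit(power_law, h, phi, p0=(phi[-1], 0.01, 2.0))
print(f"approximate 'exact' solution: {phi0:.5f}, observed order p = {p:.2f}")
```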


Author(s):  
Stephen R. Codyer ◽  
Mehdi Raessi ◽  
Gaurav Khanna

We present a GPU-accelerated numerical solver for incompressible, immiscible, two-phase fluid flows. GPU acceleration yields a significant simulation speed-up and, thus, the capability to use finer grids and/or tighter convergence criteria. We solve the Navier–Stokes equations, which include the surface tension force, using a two-step projection method that requires the iterative solution of a pressure Poisson problem at each time step. However, running a serial linear algebra solver on a CPU to solve the pressure Poisson problem can take 50–99.9% of the total simulation time. To remove this bottleneck, we exploit the large parallelization capabilities of GPUs by developing a double-precision parallel linear algebra solver, SCGPU, using NVIDIA's CUDA v.4.0 libraries. The performance of SCGPU in serial simulations is presented, in addition to an evaluation of two pre-packaged GPU linear algebra solvers, CUSP and CULA-sparse. We also present preliminary results of a GPU-accelerated MPI CPU flow solver.
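
The abstract does not spell out which Krylov method SCGPU implements, so the following is a hedged, matrix-free conjugate-gradient sketch of the kind of pressure Poisson solve described, written in plain NumPy to show the operations (matrix-vector products, dot products, vector updates) that map naturally onto GPU kernels.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Unpreconditioned CG for a symmetric positive-definite system A x = b.

    A: callable applying the discrete Laplacian (matrix-free).
    Each dot product and vector update below is what a GPU parallelizes.
    """
    x = np.zeros_like(b)
    r = b - A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: 1D Poisson problem with Dirichlet ends, matrix-free Laplacian.
n = 64
def laplacian(x):
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

x = conjugate_gradient(laplacian, np.ones(n))
```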


Author(s):  
Adrienne B. Little ◽  
Yann Bartosiewicz ◽  
Srinivas Garimella

Passive, heat-actuated devices can offer simple and energy-efficient options for a variety of end uses. An ejector pump is one such device, providing reasonable pressure head with no electrical input or moving parts. Because ejectors are useful in a wide range of applications, from nuclear reactor cooling to vapor compression in waste-heat-driven heat pumping and work recovery systems, the flow phenomena inside them must be understood to achieve improvements in component design and efficiency. In an effort to obtain insights into the flow phenomena inside an ejector, and to evaluate the effectiveness of commonly used computational tools in predicting these conditions, this study presents a set of shadowgraph images of flow inside a large-scale air ejector and compares them with computational simulations of the same flow. On-design and off-design conditions are considered, in which the suction flow is choked and not choked, respectively. The computational simulations apply the k-ε RNG and k-ω SST turbulence models available in ANSYS FLUENT to 2D, locally refined rectangular meshes for ideal-gas air flow. Experimental and computational results show that on-design ejector operation is predicted with reasonable accuracy, but the same models are not adequate at off-design conditions. This is attributed to the inability of the turbulence models to predict the interaction of shocks and expansions with the motive jet boundary, as well as the strength and position of flow features. Exploration of local flow features shows that the k-ω SST model predicts the location of flow features, as well as global inlet mass flow rates, with greater accuracy. It is concluded that a rigorous validation of turbulence models for ejector flow must rely on off-design data, where more complex phenomena occur, such as flow separation, strong boundary layer/shock interaction, and unsteady flow. Such validation will help refine turbulence models for future ejector design and allow for more efficient ejector operation.


Author(s):  
Darrin W. Stephens ◽  
Aleksandar Jemcov ◽  
Chris Sideroff

In this work, verification and validation of Reynolds-averaged Navier-Stokes (RANS) turbulence models for incompressible flows were performed with the numerical library Caelus [1]. Caelus is free and open-source software licensed under the GNU General Public License (GPL). The focus of this study is the verification and validation of the k-ω SST [2, 3], Spalart-Allmaras [4], and realizable k-ε [5] models. The cases used in this work include the zero-pressure-gradient flat plate, a two-dimensional bump in a channel flow, the NACA 0012 airfoil, and the backward-facing step. All cases except the backward-facing step include mesh dependency studies. A comprehensive description of the test cases and the computed results is provided. The results were, in general, found to be in excellent agreement with external data, suggesting that the turbulence model implementations in Caelus are correct. A companion study on the verification and validation of a predictor-corrector steady-state solver algorithm [6] had similar goals and results.


Author(s):  
Hermann F. Fasel ◽  
Dominic A. von Terzi ◽  
Richard D. Sandberg

A Flow Simulation Methodology (FSM) is presented for computing the time-dependent behavior of complex compressible turbulent flows. The development of FSM was initiated in close collaboration with C. Speziale (then at Boston University). The objective of FSM is to provide the proper amount of turbulence modelling for the unresolved scales while directly computing the largest scales. The strategy is implemented by using state-of-the-art turbulence models (as developed for RANS) and scaling the model terms with a "contribution function". The contribution function depends on the local and instantaneous "physical" resolution of the computation, which is determined during the actual simulation by comparing the size of the smallest relevant scales to the local grid size. The contribution function is designed such that it provides no modelling where the computation is locally well resolved, so that the method approaches a DNS in the fine-grid limit, and such that it models all scales in the coarse-grid limit, thus approaching an unsteady RANS calculation. Between these resolution limits, the contribution function adjusts the modelling of the unresolved scales while the larger (resolved) scales are computed as in traditional LES. However, FSM is distinctly different from LES in that it allows for a consistent transition between (unsteady) RANS, LES, and DNS within the same simulation, depending on the local flow behavior and "physical" resolution. As a consequence, FSM should require considerably fewer grid points for a given calculation than a traditional LES. This conjecture is substantiated by employing FSM to calculate the flow over a backward-facing step at low Mach number and a supersonic, axisymmetric base flow. These examples were chosen such that they expose, on the one hand, the inherent difficulties of simulating (physically) complex flows and, on the other hand, demonstrate the potential of the FSM approach for a wide range of compressible flows.
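
The abstract does not give a specific form for the contribution function; one form proposed by Speziale, f = [1 − exp(−β·Δ/η)]ⁿ, with Δ the local grid size and η the smallest relevant scale, is sketched below purely as an illustration of how the blending between the DNS and unsteady-RANS limits can be realized.

```python
import numpy as np

def contribution_function(delta, eta, beta=0.001, n=1.0):
    """Contribution function f in [0, 1] scaling the RANS model terms.

    delta: local grid size; eta: estimated smallest relevant scale
    (e.g. the Kolmogorov length). f -> 0 on locally well-resolved grids
    (DNS limit); f -> 1 on coarse grids (unsteady RANS limit).
    Form and constants follow Speziale's proposal, assumed here for
    illustration; FSM implementations may use other forms.
    """
    return np.clip((1.0 - np.exp(-beta * delta / eta)) ** n, 0.0, 1.0)

# The modelled stress handed to the momentum equations is then scaled as
# tau_modelled = contribution_function(delta, eta) * tau_RANS.
```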


2020 ◽  
Author(s):  
Marc Philipp Bahlke ◽  
Natnael Mogos ◽  
Jonny Proppe ◽  
Carmen Herrmann

Heisenberg exchange spin coupling between metal centers is essential for describing and understanding the electronic structure of many molecular catalysts, metalloenzymes, and molecular magnets with potential applications in information technology. We explore the machine-learnability of exchange spin coupling, which has not been studied before. We employ Gaussian process regression, since it can potentially deal with small training sets (as likely associated with the rather complex molecular structures required for exploring spin coupling) and since it provides uncertainty estimates ("error bars") along with predicted values. We compare a range of descriptors and kernels for 257 small dicopper complexes and find that a simple descriptor based on chemical intuition, consisting only of copper-bridge angles and copper-copper distances, clearly outperforms several more sophisticated descriptors when it comes to extrapolating toward larger, experimentally relevant complexes. Exchange spin coupling is similarly easy to learn as the polarizability, while learning dipole moments is much harder. The strength of the sophisticated descriptors lies in their ability to linearize structure-property relationships, to the point that simple linear ridge regression performs just as well as the kernel-based machine-learning model for our small dicopper data set. The superior extrapolation performance of the simple descriptor is unique to exchange spin coupling, reinforcing the crucial role of choosing a suitable descriptor and highlighting the interesting question of the role of chemical intuition vs. systematic or automated selection of features for machine learning in chemistry and materials science.
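
A minimal sketch of the modelling setup described (Gaussian process regression on a simple angle/distance descriptor, with predictive "error bars") is given below using scikit-learn. The descriptor values and coupling constants are invented for illustration; the paper's actual descriptors, kernels, and data differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical descriptor per complex: [Cu-bridge-Cu angle (deg),
# Cu-Cu distance (Å)]; J is the exchange coupling in cm^-1 (made up).
X = np.array([[97.2, 2.99], [101.5, 3.05], [95.8, 2.95], [104.0, 3.12]])
J = np.array([-280.0, 40.0, -410.0, 95.0])

# Anisotropic RBF kernel (one length scale per feature) plus a noise term.
kernel = RBF(length_scale=[5.0, 0.1]) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, J)

# GPR returns a predictive standard deviation ("error bar") per prediction.
J_pred, J_std = gpr.predict(np.array([[99.0, 3.02]]), return_std=True)
```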


2019 ◽  
Vol 16 (7) ◽  
pp. 808-817 ◽  
Author(s):  
Laxmi Banjare ◽  
Sant Kumar Verma ◽  
Akhlesh Kumar Jain ◽  
Suresh Thareja

Background: In spite of the availability of various treatment approaches, including surgery, radiotherapy, and hormonal therapy, steroidal aromatase inhibitors (SAIs) play a significant role as chemotherapeutic agents for the treatment of estrogen-dependent breast cancer, with the benefit of a reduced risk of recurrence. However, due to the greater toxicity and side effects associated with currently available anti-breast cancer agents, there is an emergent requirement to develop target-specific AIs with a safer anti-breast cancer profile. Methods: Designing target-specific and less toxic SAIs is a challenging task, although molecular modeling tools, viz. molecular docking simulations and QSAR, have been used for more than two decades for the fast and efficient design of novel, selective, potent, and safe molecules against various biological targets to fight a number of dreaded diseases and disorders. In order to design novel and selective SAIs, structure-guided, molecular-docking-assisted, alignment-dependent 3D-QSAR studies were performed on a data set comprising 22 molecules bearing a steroidal scaffold with a wide range of aromatase inhibitory activity. Results: The 3D-QSAR model developed using the molecular-weighted (MW) extent alignment approach showed good statistical quality and predictive ability compared to the model developed using the moments-of-inertia (MI) alignment approach. Conclusion: The explored binding interactions and the generated pharmacophoric features (steric and electrostatic) of the steroidal molecules could be exploited for the further design, direct synthesis, and development of new, potentially safer SAIs that can be effective in reducing the mortality and morbidity associated with breast cancer.


Author(s):  
Eun-Young Mun ◽  
Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples and systematic study-level missing data are significant barriers to IDA and, more broadly, to large-scale research synthesis. Based on the authors’ experience working on the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, it is also recognized that IDA investigations require a wide range of expertise and considerable resources and that some minimum standards for reporting IDA studies may be needed to improve transparency and quality of evidence.


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. When it comes to medical image segmentation, accuracy is even more important than in other areas, because the results give information directly applicable to disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect the spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatial adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores in the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatial adaptive weighting block and the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE scores, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
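
The exact form of the proposed block is not given in this summary; the sketch below shows one plausible reading, in which a per-pixel weight map is estimated from the feature maps by a 1×1 convolution and used to re-scale them. It is written in PyTorch with hypothetical channel counts and spatial dimensions, and the paper's actual block may differ in detail.

```python
import torch
import torch.nn as nn

class SpatialWeighting(nn.Module):
    """A sketch of a spatially adaptive weighting block (assumed form).

    A 1x1 convolution estimates a per-pixel weight map from the feature
    maps; the map then re-scales the features before the decoder or
    segmentation head.
    """
    def __init__(self, channels):
        super().__init__()
        self.weight_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # one weight in (0, 1) per spatial location
        )

    def forward(self, x):
        # The (N, 1, H, W) map broadcasts over the channel axis.
        return x * self.weight_map(x)

# Usage: insert after an encoder stage, e.g. for 64-channel feature maps.
block = SpatialWeighting(64)
y = block(torch.randn(1, 64, 128, 128))
```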

