Innovations in Multi-Physics Methods Development, Validation, and Uncertainty Quantification

2021 ◽  
Vol 2 (1) ◽  
pp. 44-56
Author(s):  
Maria Avramova ◽  
Agustin Abarca ◽  
Jason Hou ◽  
Kostadin Ivanov

This paper reviews current and upcoming innovations in the development, validation, and uncertainty quantification of nuclear reactor multi-physics simulation methods. Multi-physics modelling and simulation (M&S) provides more accurate and realistic predictions of nuclear reactor behavior, including local safety parameters. Multi-physics M&S tools can be subdivided into two groups: traditional multi-physics M&S on the assembly/channel spatial scale (currently used in industry and regulation), and novel high-fidelity multi-physics M&S on the pin (sub-pin)/sub-channel spatial scale. The current trends in reactor design and safety analysis are towards further development, verification, and validation of multi-physics, multi-scale M&S combined with uncertainty quantification and propagation. Approaches currently applied for validation of traditional multi-physics M&S are summarized and illustrated using established Nuclear Energy Agency/Organisation for Economic Co-operation and Development (NEA/OECD) multi-physics benchmarks. Novel high-fidelity multi-physics M&S allows insights crucial to resolving industry-challenge and high-impact problems that were previously intractable with traditional tools. Challenges in validating novel multi-physics M&S are discussed, along with the need to develop validation benchmarks based on experimental data. Owing to their complexity, the novel multi-physics codes remain computationally expensive for routine applications. This motivates using the high-fidelity novel models and codes to inform the low-fidelity traditional models and codes, leading to improved traditional multi-physics M&S. Uncertainty quantification and propagation across different scales (multi-scale) and multi-physics phenomena are demonstrated using the OECD/NEA Light Water Reactor Uncertainty Analysis in Modelling benchmark framework. Finally, the increasing role of data science and analytics techniques in the development and validation of multi-physics M&S is summarized.
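The forward propagation of input uncertainties through a coupled model, as exercised in frameworks like the benchmark above, can be sketched with a simple Monte Carlo loop. Everything below (the toy coupled-response model, the two input parameters, and their distribution widths) is hypothetical and serves only to illustrate the sampling-based propagation idea, not any specific benchmark exercise.

```python
import numpy as np

rng = np.random.default_rng(42)

def coupled_model(boron_ppm, inlet_temp_K):
    # Hypothetical stand-in for a coupled neutronics/thermal-hydraulics
    # response: peak temperature as a function of two uncertain inputs.
    power = 1.0 - 1.0e-4 * (boron_ppm - 1000.0)   # toy reactivity effect
    return inlet_temp_K + 600.0 * power            # crude energy balance

# Sample the input uncertainties (normal distributions, illustrative widths)
boron = rng.normal(1000.0, 20.0, size=10_000)
t_in = rng.normal(565.0, 2.0, size=10_000)

# Propagate every sample through the model and summarize the output spread
peak_T = coupled_model(boron, t_in)
print(f"mean = {peak_T.mean():.1f} K, std = {peak_T.std():.2f} K")
```

The output standard deviation combines both input contributions; in a real study the toy function would be replaced by runs of the coupled code or a surrogate of it.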

Author(s):  
Vijay S. Mahadevan ◽  
Elia Merzari ◽  
Timothy Tautges ◽  
Rajeev Jain ◽  
Aleksandr Obabko ◽  
...  

An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, tightly coupling neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled through a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled-physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative sodium-cooled fast reactor demonstration problems that demonstrate the usability of the SHARP framework.
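The coupling pattern described above, independent mono-physics solvers exchanging fields through a shared data structure under a driver, can be sketched as a Picard (fixed-point) iteration. The two solver stand-ins, the feedback coefficients, and the dictionary "backplane" below are all hypothetical illustrations, not SHARP components or interfaces.

```python
def neutronics(fuel_temp):
    # Toy Doppler feedback: power drops as fuel temperature rises.
    return 100.0 / (1.0 + 0.002 * (fuel_temp - 900.0))

def thermal_hydraulics(power):
    # Toy heat balance: power raises fuel temperature above a base value.
    return 600.0 + 3.0 * power

# Shared "backplane": each solver reads its input field and writes its output.
backplane = {"power": 0.0, "fuel_temp": 1100.0}
for it in range(200):
    p_new = neutronics(backplane["fuel_temp"])
    t_new = thermal_hydraulics(p_new)
    converged = abs(t_new - backplane["fuel_temp"]) < 1e-9
    backplane["power"], backplane["fuel_temp"] = p_new, t_new
    if converged:
        break

print(backplane)
```

In practice the exchanged fields live on meshes rather than scalars, and under-relaxation or Newton-based strategies are common when the plain fixed-point iteration converges slowly or diverges.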


2012 ◽  
Vol 48 ◽  
pp. 108-122 ◽  
Author(s):  
Armando Miguel Gomez-Torres ◽  
Victor Hugo Sanchez-Espinoza ◽  
Kostadin Ivanov ◽  
Rafael Macian-Juan

Author(s):  
Ahmad Moghrabi ◽  
David Raymond Novog

The Canadian pressure-tube supercritical water-cooled reactor (PT-SCWR) is an advanced Generation IV reactor concept, considered an evolution of the conventional Canada Deuterium Uranium (CANDU) reactor in that it retains both pressure tubes and a low-temperature, low-pressure heavy water moderator. The Canadian PT-SCWR fuel assembly utilizes a plutonium-thorium fuel mixture with supercritical light water coolant flowing through the high-efficiency re-entrant channel (HERC). In this work, the impact of fuel depletion on the evolution of lattice physics phenomena was investigated from fresh fuel to burnup conditions (25 MW d kg−1 [HM]) through sensitivity and uncertainty analyses using the lattice physics modules in the Standardized Computer Analyses for Licensing Evaluation (SCALE) package. Given the evolution with burnup of key phenomena such as void reactivity in traditional CANDU reactors, this study focuses on the impact of fission products, 233U breeding, and minor actinides on fuel performance. The work shows that the most significant change in fuel properties with burnup is the depletion of the fissile isotopes of Pu and the buildup of high neutron cross-section fission products, resulting in the expected decrease in cell k∞ with burnup. Other effects, such as the presence of protactinium and 233U, are also discussed. When the feedback coefficients are assessed in terms of reactivity, there is considerable variation as a function of fuel depletion; however, when assessed as Δk (without normalization to the reference reactivity, which changes with burnup), the net changes are almost invariant with depletion.
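The closing observation, that a feedback effect looks depletion-dependent in reactivity units but nearly invariant as Δk, follows directly from the definition ρ = (k − 1)/k, whose reference k falls with burnup. A minimal numerical illustration with hypothetical k∞ values (not the paper's results):

```python
def reactivity_pcm(k):
    # Standard definition of reactivity, expressed in pcm.
    return (k - 1.0) / k * 1e5

# Hypothetical lattice k-infinity values, nominal vs voided,
# at fresh and burned (25 MWd/kg) conditions.
cases = {
    "fresh":  {"nominal": 1.25, "voided": 1.24},
    "burned": {"nominal": 1.05, "voided": 1.04},
}

for name, k in cases.items():
    dk = k["voided"] - k["nominal"]
    drho = reactivity_pcm(k["voided"]) - reactivity_pcm(k["nominal"])
    print(f"{name}: dk = {dk:+.3f}, drho = {drho:+.0f} pcm")
```

Both cases share the same Δk = −0.010, yet the reactivity change differs by several hundred pcm, purely because the reference k (the denominator of ρ) has changed with burnup.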


Author(s):  
Zhuo Wang ◽  
Chen Jiang ◽  
Mark F. Horstemeyer ◽  
Zhen Hu ◽  
Lei Chen

Abstract One of the significant challenges in metallic additive manufacturing (AM) is the presence of many sources of uncertainty, which lead to variability in the microstructure and properties of AM parts. Consequently, it is extremely challenging to reproduce a high-quality product in mass production, and a trial-and-error approach usually must be employed to attain a product of high quality. To achieve a comprehensive uncertainty quantification (UQ) study of AM processes, we present a physics-informed, data-driven modeling framework in which multi-level data-driven surrogate models are constructed from extensive computational data obtained with multi-scale, multi-physics AM models. The framework starts with computationally inexpensive metamodels, followed by experimental calibration of the as-built metamodels and then efficient UQ analysis of the AM process. For illustration purposes, this study uses the thermal level of the AM process as an example, choosing the temperature field and melt pool as the quantities of interest. We demonstrate surrogate modeling in the presence of a high-dimensional response (e.g., the temperature field) during the AM process, and illustrate the parameter calibration and model correction of an as-built surrogate model for reliable uncertainty quantification. The experimental calibration takes particular advantage of the high-quality AM benchmark data from the National Institute of Standards and Technology (NIST). This study demonstrates the potential of the proposed data-driven UQ framework to efficiently investigate uncertainty propagation from process parameters to material microstructures, and then to macro-level mechanical properties, through a combination of advanced AM multi-physics simulations, data-driven surrogate modeling, and experimental calibration.
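The fit-then-calibrate workflow, a cheap metamodel built from simulation data and then bias-corrected against sparse experiments, can be sketched as below. The quadratic melt-pool relation, every number, and the additive-bias correction are illustrative stand-ins; they are not the paper's models and the synthetic "measurements" are not the NIST benchmark data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: build a cheap surrogate from synthetic "simulation" data.
power = np.linspace(100.0, 300.0, 30)              # laser power, W
sim_width = 20.0 + 0.4 * power + 1e-3 * power**2   # "simulated" melt-pool width, um
coeffs = np.polyfit(power, sim_width, deg=2)       # polynomial metamodel
surrogate = np.poly1d(coeffs)

# Stage 2: calibrate against sparse "experiments" with an additive bias term
# (the synthetic measurements are offset from the model by ~5 um plus noise).
exp_power = np.array([150.0, 200.0, 250.0])
exp_width = surrogate(exp_power) + 5.0 + rng.normal(0.0, 0.5, 3)
bias = np.mean(exp_width - surrogate(exp_power))

def calibrated(p):
    # Corrected surrogate: simulation-trained fit plus experimental bias.
    return surrogate(p) + bias

print(f"estimated bias = {bias:.2f} um")
```

Once calibrated, the corrected surrogate is cheap enough to drive Monte Carlo UQ over the process parameters; richer schemes replace the constant bias with a parameter-dependent discrepancy model.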


2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

<p>This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.</p><p>Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation, and effects of earthquakes as well as artificial explosions. We stress two main challenges: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy entails a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.</p><p>Broadly speaking, surrogate models fall into three categories: (a) data fits, which construct an explicit mapping (e.g., using polynomials or Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.</p><p>State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that it is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.</p>
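The algebraic idea behind the rank-2 formulation, advancing many parameter samples at once so each time step becomes one matrix-matrix product instead of many matrix-vector products, can be sketched for a generic reduced LTI system with forward Euler. The reduced operator, the sizes, and the time stepping below are arbitrary stand-ins, not the talk's seismic ROM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: reduced dimension k, number of parameter samples m.
k, m, steps, dt = 16, 64, 100, 1e-3
A = -np.eye(k) + 0.01 * rng.standard_normal((k, k))  # stand-in reduced operator
x0 = rng.standard_normal((k, m))                     # one reduced state per sample

# Rank-1 view: loop over samples, one matrix-vector product each
# (many small gemv calls, memory-bandwidth bound).
X1 = x0.copy()
for _ in range(steps):
    for j in range(m):
        X1[:, j] = X1[:, j] + dt * (A @ X1[:, j])

# Rank-2 view: stack the states as columns and advance all samples with a
# single matrix-matrix product per step (one gemm call, compute bound).
X2 = x0.copy()
for _ in range(steps):
    X2 = X2 + dt * (A @ X2)

print(np.max(np.abs(X1 - X2)))  # the two forms are algebraically identical
```

The payoff is purely in arithmetic intensity: the same flops are reorganized into BLAS-3 kernels that modern many-core nodes can keep busy, which is what makes batched Monte Carlo sampling over the ROM efficient.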


2020 ◽  
Vol 12 (12) ◽  
pp. 5059
Author(s):  
Xinzheng Lu ◽  
Donglian Gu ◽  
Zhen Xu ◽  
Chen Xiong ◽  
Yuan Tian

To improve a city's ability to prepare for and adapt to potential hazards, efforts are being invested in evaluating the performance of the built environment under multiple hazard conditions. An integrated physics-based multi-hazard simulation framework covering both individual buildings and urban areas can improve analysis efficiency and is significant for urban planning and emergency management activities. Therefore, a city-information-model-powered multi-hazard simulation framework is proposed, considering three types of hazards (i.e., earthquake, fire, and wind). The proposed framework consists of three modules: (1) data transformation, (2) physics-based hazard analysis, and (3) high-fidelity visualization. Three advantages are highlighted: (1) the database, with multi-scale models, is capable of meeting the various demands of stakeholders; (2) the hazard analyses are all based on physics-based models, leading to rational and scientific simulations; and (3) high-fidelity visualization can help non-professional users better understand a disaster scenario. A case study of the Tsinghua University campus is performed. The results indicate that the proposed framework is a practical method for multi-hazard simulation of both individual buildings and urban areas, and has great potential to help stakeholders assess and recognize the risks faced by important buildings or the whole city.
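The three-module architecture can be sketched as a chain of functions, data transformation feeding hazard analysis feeding visualization. Every interface, hazard coefficient, and "damage index" below is a hypothetical stand-in, since the abstract does not specify the framework's actual APIs or models.

```python
def transform_data(city_model):
    # Module 1: convert city-information-model records into analysis inputs.
    return [{"id": b["id"], "stories": b["stories"]} for b in city_model]

def analyze_hazard(buildings, hazard):
    # Module 2: physics-based analysis, reduced here to a toy damage score
    # with made-up per-hazard scaling factors.
    scale = {"earthquake": 0.1, "fire": 0.05, "wind": 0.02}[hazard]
    return {b["id"]: scale * b["stories"] for b in buildings}

def visualize(results):
    # Module 3: stand-in for high-fidelity visualization (text report here).
    return "\n".join(f"building {bid}: damage index {d:.2f}"
                     for bid, d in results.items())

city = [{"id": "A1", "stories": 10}, {"id": "B2", "stories": 3}]
report = visualize(analyze_hazard(transform_data(city), "earthquake"))
print(report)
```

Keeping the three stages behind narrow interfaces is what lets one data model serve several hazard types, as the framework's first highlighted advantage suggests.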

