Application of Cross Sections Uncertainty Propagation Framework to Light and Heavy Water Reactor Systems

Author(s): Dongli Huang, Hany S. Abdel-Khalik

Abstract: Uncertainty quantification has been recognized by the community as an essential component of best-estimate reactor analysis simulation because it provides a measure by which the credibility of the simulation can be assessed. In a companion paper, a framework was developed for the propagation of nuclear data uncertainties from the multigroup level through lattice physics and core calculations and ultimately to core responses of interest. The overarching goal of this framework is to automate the propagation, prioritization, mapping, and reduction of uncertainties for reactor core simulation. This paper employs both heavy and light water reactor systems to exemplify the application of the framework. Specifically, the paper is limited to the propagation of nuclear data uncertainties, starting from the multigroup cross-section covariance matrix down to core responses, e.g., the eigenvalue and power distribution, in steady-state core-wide calculations. The goal is to demonstrate how the framework employs reduction techniques to compress the uncertainty space into a very small number of active degrees of freedom (DOFs), which renders the overall process computationally feasible for day-to-day engineering evaluations.
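To make the compression step concrete, here is a minimal sketch (our illustration, not the authors' implementation) of reducing a multigroup covariance matrix to a handful of active DOFs via truncated eigendecomposition, and then sampling correlated perturbations in the reduced space; the matrix size and variance tolerance are placeholders.

```python
# Sketch: compress a cross-section covariance matrix to its active DOFs,
# then sample correlated perturbations using only those DOFs.
import numpy as np

def compress_covariance(cov: np.ndarray, tol: float = 0.99):
    """Return the leading eigenpairs capturing `tol` of the total variance."""
    w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
    w, v = w[::-1], v[:, ::-1]           # sort descending
    w = np.clip(w, 0.0, None)            # guard tiny negative round-off modes
    frac = np.cumsum(w) / w.sum()
    r = int(np.searchsorted(frac, tol)) + 1   # number of active DOFs
    return w[:r], v[:, :r]

rng = np.random.default_rng(42)
n = 500                                   # e.g., groups x reactions x nuclides
A = rng.standard_normal((n, 40))
cov = A @ A.T / 40                        # synthetic low-rank covariance

w, v = compress_covariance(cov)
print(f"active DOFs: {len(w)} of {n}")

# One correlated cross-section perturbation drawn in the reduced space:
xi = rng.standard_normal(len(w))
delta_sigma = v @ (np.sqrt(w) * xi)
```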

Author(s): Hany S. Abdel-Khalik, Dongli Huang, Ondrej Chvala, G. Ivan Maldonado

Uncertainty quantification is an indispensable analysis for nuclear reactor simulation, as it provides a rigorous approach by which the credibility of the predictions can be assessed. Focusing on the propagation of multigroup cross sections, the major challenge lies in the enormous size of the uncertainty space. Earlier work has explored the use of the physics-guided coverage mapping (PCM) methodology to assess the quality of the assumptions typically employed to reduce the size of the uncertainty space. A reduced order modeling (ROM) approach has been further developed to identify the active degrees of freedom (DOFs) of the uncertainty space, comprising all the few-group cross-section parameters required in core-wide simulation. In the current work, a sensitivity study based on the PCM and ROM results is applied to identify a suitable compressed representation of the uncertainty space, rendering the quantification and prioritization of the various sources of uncertainty feasible. While the proposed developments are general to any reactor physics computational sequence, the approach is customized here to the TRITON-NESTLE computational sequence, simulating a BWR lattice model and core model, which serves as a demonstrative tool for the implementation of the algorithms.
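One matrix-free way to identify such active DOFs is a randomized range finder; the sketch below is under our own assumptions (a generic wrapper `lattice_model` mapping multigroup perturbations to few-group parameter perturbations; the authors' exact ROM algorithm may differ).

```python
# Sketch: probe the lattice model with random input perturbations and extract
# an orthonormal basis for the subspace its outputs actually span, i.e., the
# active DOFs of the few-group uncertainty space.
import numpy as np

def find_active_dofs(lattice_model, n_inputs, n_probes=50, tol=1e-6):
    rng = np.random.default_rng(0)
    # Each column is the model response to one random multigroup perturbation.
    Y = np.column_stack([lattice_model(rng.standard_normal(n_inputs))
                         for _ in range(n_probes)])
    u, s, _ = np.linalg.svd(Y, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))   # effective numerical rank
    return u[:, :rank]                   # orthonormal basis of active DOFs
```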


Author(s): Tsuyoshi Okawa, Naoyuki Yomori

The Fugen nuclear power plant is a 165 MWe, heavy-water-moderated, boiling-light-water-cooled, pressure-tube-type reactor developed by JNC, and the world's first thermal neutron power reactor to run mainly on uranium-plutonium mixed oxide (MOX) fuel. Fugen has been loaded with a total of 726 MOX fuel assemblies since the initial core in 1978. Each in-core neutron detector assembly of Fugen, composed of four Local Power Monitors (LPMs), is located at one of sixteen positions in the heavy water moderator region of the core and monitors the power distribution during operation. Because the thermal neutron flux in Fugen is higher than that in a Boiling Water Reactor (BWR), the LPM, which comprises a fission chamber, degrades more quickly than its BWR counterpart. An improved long-life LPM (LLPM), in which the inner surface wall of the chamber is coated with 234U/235U at a ratio of 4 to 1, was developed through irradiation tests at the Japan Materials Testing Reactor (JMTR). The 234U is converted to 235U by neutron absorption, compensating for the consumption of 235U. LPMs have been loaded into Fugen since the initial core in 1978. JNC has evaluated their sensitivity degradation characteristics through the accumulated irradiation data and a parametric survey of 234σa and 235σa. Based on this evaluation experience, JNC has applied shuffling of LPM assemblies during annual inspection outages to reduce operating costs. This operation reduces both the number of LPM assemblies that must be replaced and the volume of radioactive waste. This paper describes the sensitivity degradation characteristics of the in-core neutron detectors and the degradation evaluation methods established at Fugen.
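The compensation mechanism lends itself to a simple two-nuclide burnup model; the sketch below is our illustration (not JNC's evaluation method), with an assumed flux level and one-group cross sections used purely as placeholders.

```python
# Two-nuclide burnup model of the 234U/235U-coated fission chamber: neutron
# capture in 234U breeds 235U, offsetting 235U burnup and flattening the
# detector's sensitivity loss. All constants below are illustrative.
import numpy as np

PHI = 1.0e13          # assumed thermal flux [n/cm^2/s]
SIG_C_234 = 100e-24   # assumed 234U capture cross section [cm^2]
SIG_A_235 = 680e-24   # assumed 235U absorption cross section [cm^2]
SIG_F_235 = 580e-24   # assumed 235U fission cross section [cm^2]

def coating_sensitivity(n234_0, n235_0, years, steps=10000):
    """Integrate dN234/dt = -sig_c234*phi*N234 and
                 dN235/dt = +sig_c234*phi*N234 - sig_a235*phi*N235,
       returning the relative detector sensitivity ~ N235*sig_f235."""
    dt = years * 3.156e7 / steps
    n234, n235 = n234_0, n235_0
    s0 = n235_0 * SIG_F_235
    for _ in range(steps):
        d234 = -SIG_C_234 * PHI * n234
        d235 = SIG_C_234 * PHI * n234 - SIG_A_235 * PHI * n235
        n234 += d234 * dt
        n235 += d235 * dt
    return n235 * SIG_F_235 / s0

# 4:1 coating (LLPM) vs. a pure-235U coating, after five years of irradiation:
print("LLPM:", coating_sensitivity(4.0, 1.0, 5.0))
print("LPM :", coating_sensitivity(0.0, 1.0, 5.0))
```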


2018, Vol. 4, pp. 14
Author(s): James Dyrda, Ian Hill, Luca Fiorito, Oscar Cabellos, Nicolas Soppera

Uncertainty propagation to keff using a Total Monte Carlo sampling process is commonly used to address the issues associated with non-linear dependencies and non-Gaussian nuclear parameter distributions. We suggest that, in general, keff sensitivities to nuclear data perturbations are not problematic and remain linear over a large range; the same cannot be said definitively for nuclear data parameters and their impact on final cross sections and distributions. Instead of running hundreds or thousands of neutronics calculations, we therefore investigate the possibility of taking those many cross-section file samples and performing 'cheap' sensitivity perturbation calculations. This is efficiently possible with the NEA Nuclear Data Sensitivity Tool (NDaST), and we name this process the half Monte Carlo method (HMM). We demonstrate that this is indeed possible with a test example of JEZEBEL (PMF001) drawn from the ICSBEP handbook, comparing keff values calculated directly with SERPENT to those predicted with NDaST. Furthermore, we show that one may retain the normal NDaST benefits: a deeper analysis of the resultant effects in terms of reaction and energy breakdown, without the normal computational burden of Monte Carlo (results within minutes, rather than days). Finally, we assess the rationality of using either full or half Monte Carlo methods by also using the covariance data to perform simple linear 'sandwich formula' propagation of uncertainty onto the selected benchmarks. This allows us to draw some broad conclusions about the relative merits of selecting a technique with a full, half, or zero degree of Monte Carlo simulation.
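The three propagation options being compared can be illustrated on synthetic data; in the sketch below the sensitivity vector and covariance matrix are made up (not JEZEBEL's), and a full TMC study would rerun the neutronics code per sample rather than use the first-order estimate.

```python
# Sketch: zero-Monte-Carlo (sandwich formula) vs. half-Monte-Carlo (HMM)
# propagation of cross-section uncertainty to keff, on synthetic inputs.
import numpy as np

rng = np.random.default_rng(7)
n_groups = 33
S = rng.normal(0.0, 0.01, n_groups)        # dk/k per d(sigma)/sigma, assumed
L = rng.normal(0.0, 0.02, (n_groups, n_groups))
C = L @ L.T / n_groups                      # synthetic covariance matrix

# (1) Zero Monte Carlo: linear 'sandwich formula', sigma_k^2 = S^T C S
sigma_sandwich = np.sqrt(S @ C @ S)

# (2) Half Monte Carlo: sample cross-section files, but replace each
#     transport run with the cheap first-order estimate dk = S . d(sigma)
d_sigma = rng.multivariate_normal(np.zeros(n_groups), C, size=1000)
sigma_hmm = (d_sigma @ S).std()

print(f"sandwich: {sigma_sandwich:.5f}, HMM: {sigma_hmm:.5f}")
# The two agree here because this synthetic model is exactly linear and
# Gaussian; TMC exists precisely for the cases where that assumption fails.
```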


2021, Vol. 247, pp. 09026
Author(s): A.G. Nelson, K.M. Ramey, F. Heidet

The nuclear data evaluation process inherently yields a nuclear data set designed to produce accurate results for the neutron energy spectra corresponding to a specific benchmark suite of experiments. When studying reactors with spectral conditions outside of, or not well represented by, the experimental database used to evaluate the nuclear data, care should be given to the relevance of the nuclear data used. In such cases, larger biases or uncertainties may be present than in a reactor with well-represented spectra. The motivation of this work is to understand the magnitude of the differences between recent nuclear data libraries, in order to provide estimates of the expected variability in criticality and power distribution results for sodium-cooled, steel-reflected, metal-fueled fast reactor designs. This work was performed by creating a 3D OpenMC model of a sodium-cooled, steel-reflected, metal-fueled fast reactor similar to the FASTER design but without a thermal test region. This OpenMC model was used to compare the differences in eigenvalues, reactivity coefficients, and the spatial and energetic effects on flux and power distributions between the ENDF/B-VII.0, ENDF/B-VII.1, ENDF/B-VIII.0, JEFF-3.2, and JEFF-3.3 nuclear data libraries. These investigations revealed that reactivity differences between the above libraries can vary by nearly 900 pcm and that the fine-group fluxes can vary by up to 18% in individual groups. Results also show a strong variation in the flux and power distributions near the fuel/reflector interface due to the high variability of the 56Fe cross sections in the libraries examined. This indicates that core design efforts for a sodium-cooled, steel-reflected, metal-fueled reactor will require the application of relatively large nuclear data uncertainties and/or the development of a representative benchmark-quality experiment.
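A hedged sketch of this comparison workflow (not the authors' scripts) is to run one fixed OpenMC model against several installed data libraries and collect the eigenvalues; the `cross_sections.xml` paths are placeholders, and `model` is assumed to be built elsewhere.

```python
# Sketch: swap the nuclear data library under a fixed OpenMC model and
# compare the resulting eigenvalues.
import openmc

LIBRARIES = {
    "endfb71": "/data/endfb71/cross_sections.xml",   # placeholder path
    "endfb80": "/data/endfb80/cross_sections.xml",   # placeholder path
    "jeff33":  "/data/jeff33/cross_sections.xml",    # placeholder path
}

def run_library(model: openmc.Model, xs_path: str):
    # openmc >= 0.13 exposes this config mapping; older versions use the
    # OPENMC_CROSS_SECTIONS environment variable instead.
    openmc.config["cross_sections"] = xs_path
    sp_file = model.run()                 # returns the final statepoint path
    with openmc.StatePoint(sp_file) as sp:
        return sp.keff                    # 'k_combined' in older API versions

# results = {name: run_library(model, path) for name, path in LIBRARIES.items()}
# Library-to-library spreads of ~900 pcm, as reported above, would show up
# directly in `results`.
```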


2021, Vol. 247, pp. 15003
Author(s): G. Valocchi, P. Archier, J. Tommasi

In this paper, we present a sensitivity analysis of beta effective to nuclear data for the UM17x17 experiment performed in the EOLE reactor. This work is carried out using the APOLLO3® platform. For the flux calculation, the standard two-step (lattice/core) approach is used. The delayed nuclear data, in contrast, are processed for direct use in the core calculation without going through the lattice step. We use the JEFF-3.1.1 nuclear data library for cross sections and delayed data. The calculation of k-effective and beta effective is validated against a TRIPOLI4® calculation, while the main sensitivities are validated against direct calculations. Finally, uncertainty propagation is performed using the COMAC-V2.0 covariance library.
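For reference, the quantity whose sensitivities are computed here is conventionally defined by adjoint weighting; the standard form below is general background, not a statement about APOLLO3®'s internal implementation.

```latex
% Adjoint-weighted effective delayed neutron fraction (standard definition)
\beta_{\mathrm{eff}} =
  \frac{\langle \phi^{\dagger},\, \chi_d F_d\, \phi \rangle}
       {\langle \phi^{\dagger},\, \chi F\, \phi \rangle}
```

Here φ is the forward flux, φ† the adjoint flux, F and F_d the total and delayed neutron production operators, and χ, χ_d the corresponding fission spectra; first-order sensitivities of this ratio to the nuclear data then feed the usual sandwich-formula propagation against the COMAC-V2.0 covariances.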


2021, Vol. 247, pp. 07003
Author(s): A. Sargeni, E. Ivanov

The paper presents our first results for exercise III-I-2c of the OECD-NEA UAM-LWR benchmark, which is intended to elaborate the methodology of uncertainty propagation. The case considered studies the behavior of a full PWR core in a fast (~0.1 s) rod ejection transient. According to the benchmark, the core is in a Hot Zero Power state. The authors used brute-force sampling to propagate nuclear data and thermal-hydraulic uncertainties with the 3D IRSN computational chain HEMERA, which couples the reactor physics code CRONOS to the thermal-hydraulic core code FLICA4. The nuclear data uncertainties were represented as cross-section standard deviations (in percentage of the mean cross-section values) supplied by the UAM team. In addition to the original benchmark, the study includes a case with an increased power peak obtained by a supplementary rod ejection, i.e., with higher reactivity. Both results are similar to what we obtained in the mini-core rod ejection: the power standard deviation, in percentage of the mean power, follows the mean power curve. We split the variance with direct calculations: in one set the cross sections are perturbed while the thermal-hydraulic inputs are kept constant, and in the other the reverse. The results show that uncertainties due to nuclear data dominate over those due to thermal-hydraulics. Furthermore, the major contributors to the peak-power variance lie in the fast-group cross sections.
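The brute-force sampling and variance split can be illustrated schematically; the sketch below uses a stand-in function `run_transient` in place of the HEMERA (CRONOS/FLICA4) chain, and all coefficients and standard deviations are made up for illustration.

```python
# Sketch: brute-force sampling of two uncertainty groups, then a variance
# split by freezing one group at a time.
import numpy as np

rng = np.random.default_rng(1)

def run_transient(xs_pert, th_pert):
    """Stand-in for one coupled transient run, returning a power peak [a.u.].
    A real study would launch the code chain with perturbed inputs."""
    return 1.0 + 3.0 * xs_pert + 0.5 * th_pert + 0.2 * xs_pert * th_pert

N = 500
xs = rng.normal(0.0, 0.05, N)   # cross-section perturbations (assumed 5% sd)
th = rng.normal(0.0, 0.05, N)   # thermal-hydraulic perturbations (assumed)

peak_total = np.array([run_transient(x, t) for x, t in zip(xs, th)])
peak_xs = np.array([run_transient(x, 0.0) for x in xs])   # TH frozen
peak_th = np.array([run_transient(0.0, t) for t in th])   # XS frozen

print("total sd:", peak_total.std())
print("XS-only :", peak_xs.std(), "| TH-only:", peak_th.std())
# With the coefficients above, the cross-section term dominates, mirroring
# the paper's finding that nuclear data outweigh the thermal-hydraulic inputs.
```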


2020, Vol. 67 (6), pp. 1076-1085
Author(s): R. J. Desai, B. M. Patre, R. K. Munje, A. P. Tiwari, S. R. Shimjith

2019, Vol. 6 (1)
Author(s): Thomas A. Ferguson, Eleodor M. Nichita

Abstract: To reduce computational expense, full-core production-type neutronics calculations are customarily performed using a simplified core model in which large regions of the core, called nodes, are assumed to be homogeneous. The process of generating the few-group homogenized-node macroscopic cross sections is called lattice homogenization. The simplest homogenization method is standard homogenization (SH), and full-core models based on it do not usually reproduce heterogeneous-core calculations closely. To improve agreement between node-homogenized core results and heterogeneous-core results, advanced homogenization techniques are used; such techniques tend to use additional parameters besides homogenized macroscopic cross sections. Superhomogenization (SPH) is an advanced lattice homogenization method initially developed for light-water-reactor (LWR) lattices, in which fuel elements are arranged in a rectangular array. It has the advantage of not requiring any modification to the full-core diffusion code for its implementation. For LWRs, SPH establishes neutronic equivalence between detailed-geometry heterogeneous fuel-pin cells and homogenized fuel-pin cells by adjusting the homogenized multigroup macroscopic cross sections and diffusion coefficients. This work investigates the possible use of the SPH methodology for pressurized heavy-water reactor (PHWR) lattices, whose fuel pins are arranged in concentric rings rather than in a rectangular array. Results for single-node (SN) as well as multinode (MN) lattice-calculation models are presented. They show that, with proper region definition, the SPH methodology can be used for PHWR lattices, but that the improvement in homogenization accuracy is only marginal compared with SH when comparing results for the same type of lattice model (SN or MN).
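For orientation, the SPH equivalence referred to above is usually written as a fixed-point iteration on per-node, per-group correction factors; the form below is the standard textbook statement (assumed here, not the paper's specific variant).

```latex
% SPH fixed-point iteration for group g in a given node
\mu_g^{(n+1)} = \frac{\bar{\phi}_g^{\,\mathrm{het}}}{\phi_g^{\,\mathrm{hom},(n)}},
\qquad
\tilde{\Sigma}_{x,g}^{(n+1)} = \mu_g^{(n+1)}\,\Sigma_{x,g}^{\,\mathrm{hom}}
```

The node-averaged reference heterogeneous flux φ̄_g^het is fixed; the homogeneous flux φ_g^hom,(n) is recomputed with the corrected cross sections Σ̃ at each iteration, and at convergence the homogenized node reproduces the heterogeneous reaction rates.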


2019, Vol. 211, pp. 07002
Author(s): Shengli Chen, David Bernard, Pascal Archier, Cyrille De Saint Jean, Gilles Noguere, et al.

Correlations involving neutron inelastic scattering angular distributions are not included in the Joint Evaluated Fission and Fusion (JEFF) nuclear data library, although they are key quantities for the uncertainty propagation of nuclear data. By reproducing the angle-integrated cross sections and uncertainties of JEFF-3.1.1, the present work obtains the covariance matrix between high-energy model parameters using the least-squares method implemented in the CONRAD code. With this matrix, it is possible to generate correlations between angle-integrated cross sections and angular distributions, which are usually represented by Legendre coefficients. As expected, strong correlations are found, for example, between the Legendre coefficients of elastic and first-level inelastic scattering and the angle-integrated total, elastic, and total inelastic cross sections.
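As background for the last point, angular distributions are conventionally stored as Legendre coefficients; one common normalization (conventions differ between formats) is shown below.

```latex
% Legendre expansion of a scattering angular distribution
f(\mu, E) = \sum_{l=0}^{L} \frac{2l+1}{2}\, a_l(E)\, P_l(\mu),
\qquad \mu = \cos\theta
```

A covariance matrix over the high-energy model parameters p then induces, through the derivatives ∂a_l/∂p and ∂σ/∂p, exactly the kind of cross-correlations between the a_l(E) and the angle-integrated cross sections reported here.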

