Presentation and Discussion of the UAM/Exercise I-1b: “Pin-Cell Burn-Up Benchmark” with the Hybrid Method

2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
O. Cabellos

The aim of this work is to present the Exercise I-1b “pin-cell burn-up benchmark” proposed in the framework of the OECD LWR UAM. Its objective is to address the uncertainty due to the basic nuclear data as well as the impact of processing the nuclear and covariance data in a pin-cell depletion calculation. Four different sensitivity/uncertainty propagation methodologies participated in this benchmark (GRS, NRG, UPM, and SNU&KAERI). The paper describes the main features of the UPM model (hybrid method) compared with the other methodologies. The requested output provided by UPM is presented and discussed against the results of the other methodologies.
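The classical deterministic alternative to sampling-based propagation, against which hybrid methods are usually compared, is the first-order "sandwich rule": the output variance is the sensitivity vector folded with the nuclear-data covariance matrix. A minimal sketch, with purely illustrative numbers (not benchmark data, and not the UPM hybrid method itself):

```python
# First-order "sandwich rule" propagation of nuclear-data covariances:
# var(k) = S^T C S, where S holds relative sensitivities of an output
# (e.g. k-eff) to nuclear-data parameters and C is their relative
# covariance matrix. All numbers below are illustrative assumptions.

def sandwich_variance(S, C):
    """Return S^T C S for sensitivity vector S and covariance matrix C."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Two hypothetical parameters: a fission and a capture cross section.
S = [0.9, -0.3]                  # relative sensitivities (dk/k per dsigma/sigma)
C = [[4.0e-4, -1.0e-4],          # relative covariance matrix
     [-1.0e-4, 9.0e-4]]

var_k = sandwich_variance(S, C)
print(f"relative k-eff uncertainty: {var_k ** 0.5:.4%}")
```

The off-diagonal terms of C matter: here the anti-correlation between the two parameters adds to the total because the sensitivities have opposite signs.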

2019 ◽  
Vol 211 ◽  
pp. 07008 ◽  
Author(s):  
Oscar Cabellos ◽  
Luca Fiorito

The aim of this work is to review different Monte Carlo techniques used to propagate nuclear data uncertainties. Firstly, we introduce the Monte Carlo technique applied to Uncertainty Quantification studies in safety calculations of large-scale systems. As an example, the impact of the nuclear data uncertainty of JEFF-3.3 235U, 238U and 239Pu is demonstrated for the main design parameters of a typical 3-loop Westinghouse PWR unit. Secondly, the Bayesian Monte Carlo technique for data adjustment is presented. An example of 235U adjustment using criticality and shielding integral benchmarks shows the importance of performing a joint adjustment based on different sets of integral benchmarks.
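The Monte Carlo propagation idea can be sketched in a few lines: draw perturbed parameters from their assumed covariance, evaluate the model per sample, and take the spread of the outputs as the propagated uncertainty. The toy response below and the 1%/2% standard deviations are illustrative assumptions, not JEFF-3.3 data:

```python
# Hedged sketch of Monte Carlo nuclear-data uncertainty propagation:
# sample perturbed parameters from a (here diagonal) covariance,
# evaluate a toy response per sample, and report the output spread.
import random
import statistics

random.seed(12345)

def toy_keff(sigma_f, sigma_c):
    """Toy response: multiplication rises with fission, falls with capture."""
    return sigma_f / (sigma_f + sigma_c)

NOMINAL = {"sigma_f": 1.20, "sigma_c": 0.40}   # illustrative cross sections
REL_STD = {"sigma_f": 0.01, "sigma_c": 0.02}   # assumed 1% and 2% uncertainties

samples = []
for _ in range(5000):
    pars = {k: v * random.gauss(1.0, REL_STD[k]) for k, v in NOMINAL.items()}
    samples.append(toy_keff(**pars))

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"k = {mean:.5f} +/- {std:.5f}")
```

In a real study each "sample" is a full transport calculation, which is why the number of samples, not the sampling logic, dominates the cost.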


2020 ◽  
Vol 6 ◽  
pp. 8 ◽  
Author(s):  
Axel Laureau ◽  
Vincent Lamirand ◽  
Dimitri Rochman ◽  
Andreas Pautz

A correlated sampling technique has been implemented to estimate the impact of cross section modifications on neutron transport in Monte Carlo simulations within a single calculation. This implementation has been coupled to a Total Monte Carlo approach, which consists of propagating nuclear data uncertainties with random cross section files. The TMC-CS (Total Monte Carlo with Correlated Sampling) approach offers an interesting speed-up of the associated computation time. This methodology is detailed in this paper, together with two application cases that validate and illustrate the gain provided by this technique: the HMI-001 benchmark, a highly enriched uranium/iron metal core reflected by a stainless-steel reflector, and the PETALE experimental programme in the CROCUS zero-power light water reactor.
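The essence of correlated sampling can be shown on a one-dimensional toy problem (this is a sketch of the general idea, not the Serpent2/TMC-CS implementation): the same sampled neutron free flights, drawn for a total cross section SIGMA, are reused to estimate a response under a perturbed SIGMA_PERT by reweighting each history with its likelihood ratio. Because both estimates share the same histories, the statistical noise largely cancels in their difference.

```python
# Correlated sampling on exponential free flights (illustrative only):
# reweight histories sampled at SIGMA with
#   w = (SIGMA_PERT / SIGMA) * exp(-(SIGMA_PERT - SIGMA) * x)
# to estimate the mean free path at SIGMA_PERT from the same samples.
import math
import random

random.seed(7)

SIGMA, SIGMA_PERT = 1.0, 1.1            # illustrative macroscopic XS (1/cm)

flights = [random.expovariate(SIGMA) for _ in range(20000)]

mean_nominal = sum(flights) / len(flights)

weights = [(SIGMA_PERT / SIGMA) * math.exp(-(SIGMA_PERT - SIGMA) * x)
           for x in flights]
mean_pert = sum(w * x for w, x in zip(weights, flights)) / sum(weights)

print(f"nominal mean free path : {mean_nominal:.4f} (exact 1.0000)")
print(f"perturbed, reweighted  : {mean_pert:.4f} (exact {1 / SIGMA_PERT:.4f})")
```

This is why TMC-CS needs only one transport run per random file family instead of one per perturbation.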


2020 ◽  
Vol 6 ◽  
pp. 9
Author(s):  
Axel Laureau ◽  
Vincent Lamirand ◽  
Dimitri Rochman ◽  
Andreas Pautz

The PETALE experimental programme in the CROCUS reactor intends to provide integral measurements to constrain stainless steel nuclear data. This article presents the operating principle of the experiments and the tools and methodology developed to design and optimize them. Two acceleration techniques have been implemented in the Serpent2 code to perform a Total Monte Carlo uncertainty propagation using variance reduction and correlated sampling. Their application to the estimation of the expected reaction rates in dosimeters is also discussed, together with the estimation of the impact of the nuisance parameters of the aluminium used in the experiment structures.


2018 ◽  
Vol 4 ◽  
pp. 15 ◽  
Author(s):  
Henrik Sjöstrand ◽  
Nicola Asquith ◽  
Petter Helgesson ◽  
Dimitri Rochman ◽  
Steven van der Marck

Random sampling methods are used for nuclear data (ND) uncertainty propagation, often in combination with Monte Carlo codes (e.g., MCNP). One example is the Total Monte Carlo (TMC) method. The standard way to visualize and interpret ND covariances is the Pearson correlation coefficient, ρ = cov(x, y) / (σ_x σ_y), where x and y can be any parameters dependent on ND. The spread in the output, σ, has both an ND component, σ_ND, and a statistical component, σ_stat. The contribution from σ_stat decreases the value of ρ, and hence ρ underestimates the impact of the correlation. One way to address this is to minimize σ_stat by using longer simulation run-times. Alternatively, as proposed here, a so-called fast correlation coefficient is used, ρ_fast = (cov(x, y) − cov(x_stat, y_stat)) / (σ_x,ND σ_y,ND), with σ_ND = (σ² − σ_stat²)^(1/2). In many cases, cov(x_stat, y_stat) can be assumed to be zero. The paper explores three examples: a synthetic data study, correlations in the NRG High Flux Reactor spectrum, and correlations between integral criticality experiments. It is concluded that the use of ρ underestimates the correlation. The impact of the use of ρ_fast is quantified, and the implications of the results are discussed.
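The dilution effect and its correction can be demonstrated numerically. In the synthetic setup below (an illustration, not the paper's data), two outputs share a common ND-driven component but carry independent statistical noise of known size; ρ is diluted toward 0.8 while ρ_fast, computed under the stated assumption cov(x_stat, y_stat) = 0, recovers the true ND correlation of 1.0:

```python
# Synthetic demonstration of rho vs rho_fast. The shared component z plays
# the role of the ND-driven spread; the added noise is "statistical" with
# known standard deviations, so sigma_ND = sqrt(sigma^2 - sigma_stat^2).
import math
import random

random.seed(2018)

SIGMA_STAT_X, SIGMA_STAT_Y = 0.5, 0.5    # known statistical uncertainties

z = [random.gauss(0.0, 1.0) for _ in range(20000)]   # shared ND component
x = [zi + random.gauss(0.0, SIGMA_STAT_X) for zi in z]
y = [zi + random.gauss(0.0, SIGMA_STAT_Y) for zi in z]

def moments(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a) / len(a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / len(b))
    return cov, sa, sb

cov_xy, sx, sy = moments(x, y)
rho = cov_xy / (sx * sy)

snd_x = math.sqrt(sx ** 2 - SIGMA_STAT_X ** 2)
snd_y = math.sqrt(sy ** 2 - SIGMA_STAT_Y ** 2)
rho_fast = cov_xy / (snd_x * snd_y)      # cov(x_stat, y_stat) assumed zero

print(f"rho      = {rho:.3f}  (diluted by statistics)")
print(f"rho_fast = {rho_fast:.3f}  (close to the true ND correlation, 1.0)")
```

With σ_stat = 0.5 on both outputs the expected Pearson value is 1/1.25 = 0.8, which is exactly the underestimation the abstract describes.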


2020 ◽  
Vol 239 ◽  
pp. 22003
Author(s):  
Alexander Vasiliev ◽  
Marco Pecchia ◽  
Dimitri Rochman ◽  
Hakim Ferroukhi ◽  
Erwin Alhassan

In this work, an overview of the relevance of nuclear data (ND) uncertainties to Light Water Reactor (LWR) neutron dosimetry is presented. The paper summarizes the results of several studies realized at the LRT laboratory of the Paul Scherrer Institute over the past decade. The studies were done using the base LRT calculation methodology for dosimetry assessments, which involves a neutron source distribution representation obtained from validated CASMO/SIMULATE core follow calculation models and subsequent neutron transport simulations with the MCNP® software. The methodology was validated using as reference data the results of numerous measurement programs carried out at Swiss NPPs. Namely, the following experimental programs are considered in the given overview: post irradiation examination (PIE) of PWR “gradient probes” and of BWR fast neutron fluence (FNF) monitors. For both cases, assessments of the nuclear data related uncertainties were performed. Where appropriate, a cross-verification of the deterministic and stochastic uncertainty propagation techniques is provided. Furthermore, it is shown which particular neutron-induced reactions contribute dominantly to the overall ND-related uncertainties. The presented results should help in assessing the overall impact of the various nuclear data uncertainties on dosimetry applications and provide relevant feedback to nuclear data evaluators.


2021 ◽  
Vol 247 ◽  
pp. 10002
Author(s):  
V. Vallet ◽  
J. Huyghe ◽  
C. Vaglio-Gaudard ◽  
D. Lecarpentier ◽  
C. Reynard-Carette

Currently there are no integral experimental data for code validation regarding the decay heat of MOX fuels, except for fission burst experiments (for fission product contributions at short cooling times) and post-irradiation examinations of nuclide inventories (restricted to a limited number of nuclides of interest for decay heat). Uncertainty quantification therefore mainly relies on the propagation of nuclear data covariances. In recent years, the transposition method, based on data assimilation theory, was used to transpose the experiment-to-calculation discrepancies at a given set of parameters (cooling time, fuel burnup) to another set of parameters. As an example, this method was applied to the CLAB experiments, and the experiment-to-calculation discrepancies at 13 years were transposed to a UOX fuel between 5 and 27 years of cooling and for burnups from 10 to 50 GWd/t. The purpose of this paper is to study to what extent the transposition method could be used for MOX fuels. In particular, the Dickens fission burst experiment on 239Pu was considered for MOX fuels at short cooling times (< 1h30) and low burnup (< 10 GWd/t). The impact of fission yield (FY) correlations is also discussed. In conclusion, the efficiency of the transposition process is limited by experimental uncertainties larger than the nuclear data uncertainties, and by the fact that fission burst experiments are only representative of the FY contribution to the decay heat uncertainty of an irradiated reactor fuel. Nevertheless, this method strengthens the decay heat uncertainties at very short cooling times, previously based only on the propagation of nuclear data covariances through computation.
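The limiting role of the experimental uncertainty can be seen in a scalar, GLLS-style sketch of the assimilation step (a generic illustration of the transposition idea, not the exact method or numbers of the paper): an integral measurement with sensitivity s to a parameter reduces the prior variance var_p to var_p − (s·var_p)² / (s²·var_p + var_e). When the experimental variance var_e dominates, almost no reduction is obtained, which mirrors the paper's conclusion.

```python
# Scalar GLLS-style update (illustrative): posterior variance of a
# nuclear-data parameter after assimilating one integral measurement
# with sensitivity s and experimental variance var_e.
def posterior_variance(var_p, s, var_e):
    gain = (s * var_p) ** 2 / (s ** 2 * var_p + var_e)
    return var_p - gain

var_p = 0.02 ** 2                 # assumed 2% prior nuclear-data uncertainty
s = 1.0                           # unit sensitivity, for illustration

for var_e in (0.005 ** 2, 0.05 ** 2):
    post = posterior_variance(var_p, s, var_e)
    print(f"exp. unc. {var_e ** 0.5:.1%} -> posterior unc. {post ** 0.5:.2%}")
```

A 0.5% experiment shrinks the 2% prior to about 0.5%, while a 5% experiment leaves it nearly untouched.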


Water ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 1830
Author(s):  
Gullnaz Shahzadi ◽  
Azzeddine Soulaïmani

Computational modeling plays a significant role in the design of rockfill dams. Various constitutive soil parameters are used to design such models, which often involve high uncertainties due to the complex structure of rockfill dams comprising various zones with different soil parameters. This study performs an uncertainty analysis and a global sensitivity analysis to assess the effect of the constitutive soil parameters on the behavior of a rockfill dam. A Finite Element code (Plaxis) is used for the structural analysis. A database of the computed displacements at the inclinometers installed in the dam is generated and compared to in situ measurements. Surrogate models are significant tools for approximating the relationship between the input soil parameters and the displacements, thereby reducing the computational cost of parametric studies. Polynomial chaos expansion and deep neural networks are used to build surrogate models that compute the Sobol indices required to identify the impact of the soil parameters on the dam's behavior.
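First-order Sobol indices of the kind computed from such surrogates can be estimated with a pick-and-freeze scheme. The sketch below uses a toy linear response in place of the Plaxis model and its surrogate (everything here is an illustrative assumption); for y = 3·x1 + x2 with independent standard-normal inputs, the exact indices are 0.9 and 0.1:

```python
# Saltelli-style pick-and-freeze estimator of first-order Sobol indices:
# S_i ~ (1/N) * sum_j f(B)_j * (f(A_B^i)_j - f(A)_j) / Var(y),
# where A_B^i is matrix A with column i replaced by column i of B.
import random

random.seed(42)

def model(x1, x2):
    return 3.0 * x1 + 1.0 * x2    # toy stand-in for the dam surrogate

N = 40000
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
B = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

fA = [model(*a) for a in A]
fB = [model(*b) for b in B]
mean = sum(fA) / N
var = sum((v - mean) ** 2 for v in fA) / N

indices = []
for i in range(2):
    fABi = []
    for a, b in zip(A, B):
        # "Pick and freeze": take coordinate i from B, the rest from A.
        args = tuple(b[j] if j == i else a[j] for j in range(2))
        fABi.append(model(*args))
    Si = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / N / var
    indices.append(Si)
    print(f"S{i + 1} = {Si:.3f}")
```

With a polynomial chaos surrogate the same indices can be read off analytically from the expansion coefficients, which is one reason PCE is attractive for this task.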


Blood ◽  
2020 ◽  
Vol 136 (Supplement 1) ◽  
pp. 43-44
Author(s):  
Amandine Pradier ◽  
Adrien Petitpas ◽  
Anne-Claire Mamez ◽  
Federica Giannotti ◽  
Sarah Morin ◽  
...  

Introduction Allogeneic hematopoietic stem cell transplantation (HSCT) is a well-established therapeutic modality for a variety of hematological malignancies and congenital disorders. One of the major complications of the procedure is graft-versus-host-disease (GVHD) initiated by T cells co-administered with the graft. Removal of donor T cells from the graft is a widely employed and effective strategy to prevent GVHD, although its impact on post-transplant immune reconstitution might significantly affect anti-tumor and anti-infectious responses. Several approaches of T cell depletion (TCD) exist, including in vivo depletion using anti-thymocyte globulin (ATG) and/or post-transplant cyclophosphamide (PTCy) as well as in vitro manipulation of the graft. In this work, we analyzed the impact of different T cell depletion strategies on immune reconstitution after allogeneic HSCT. Methods We retrospectively analysed data from 168 patients transplanted between 2015 and 2019 at Geneva University Hospitals. In our center, several methods for TCD are being used, alone or in combination: 1) In vivo T cell depletion using ATG (ATG-Thymoglobulin 7.5 mg/kg or ATG-Fresenius 25 mg/kg); 2) in vitro partial T cell depletion (pTCD) of the graft obtained through in vitro incubation with alemtuzumab (Campath [Genzyme Corporation, Cambridge, MA]), washed before infusion and administered at day 0, followed on day +1 by an add-back of unmanipulated grafts containing about 100 × 106/kg donor T cells. The procedure is followed by donor lymphocyte infusions at incremental doses starting with 1 × 106 CD3/kg at 3 months to all patients who had received pTCD grafts with RIC in the absence of GVHD; 3) post-transplant cyclophosphamide (PTCy; 50 mg/kg) on days 3 and 4 post-HSCT. Absolute counts of CD3, CD4, CD8, CD19 and NK cells measured by flow cytometry during the first year after allogeneic HSCT were analyzed. 
Measures obtained from patients with mixed donor chimerism or after therapeutic DLI were excluded from the analysis. Cell numbers over time were compared using mixed-effects linear models depending on the TCD. Multivariable analysis was performed taking into account the impact of clinical factors differing between patient groups (patient's age, donor type and conditioning). Results ATG was administered to 77 (46%) patients, 15 (9%) patients received a pTCD graft and 26 (15%) patients received a combination of both ATG and a pTCD graft. 24 (14%) patients were treated with PTCy and 26 (15%) patients received a T cell replete graft. 60% of patients had a reduced intensity conditioning (RIC). 48 (29%) patients received grafts from an identical sibling donor, 94 (56%) from a matched unrelated donor, 13 (8%) from a mismatched unrelated donor and 13 (8%) received haploidentical grafts. TCD protocols had no significant impact on CD3 or CD8 T cell reconstitution during the first year post-HSCT (Figure 1). Conversely, CD4 T cell recovery was affected by the ATG/pTCD combination (coefficient ± SE: -67±28, p=0.019) when compared to the T cell replete group (Figure 1). Analysis of data censored for acute or chronic GVHD requiring treatment or relapse revealed a delay of CD4 T cell reconstitution in the ATG and/or pTCD treated groups (ATG: -79±27, p=0.004; pTCD: -100±43, p=0.022; ATG/pTCD: -110±33, p<0.001). Interestingly, pTCD alone or in combination with ATG resulted in a better reconstitution of NK cells compared to the T cell replete group (pTCD: 152±45, p<0.001; ATG/pTCD: 94±36, p=0.009; Figure 1). A similar effect of pTCD was also observed for B cells (pTCD: 170±48, p<0.001; ATG/pTCD: 127±38, p<0.001). The effect of pTCD on NK cells was confirmed when data were censored for GVHD and relapse (pTCD: 132±60, p=0.028; ATG/pTCD: 106±47, p=0.023), while only ATG/pTCD retained a significant impact on B cells (102±49, p=0.037).
The use of PTCy did not affect T, NK or B cell reconstitution when compared to the T cell replete group. Conclusion Our results indicate that all TCD protocols, with the only exception of PTCy, are associated with a delayed recovery of CD4 T cells, whereas pTCD of the graft, alone or in combination with ATG, significantly improves NK and B cell reconstitution. Figure 1 Disclosures No relevant conflicts of interest to declare.


2022 ◽  
Author(s):  
Joelle Mailloux

Abstract The JET 2019-2020 scientific and technological programme exploited the results of years of concerted scientific and engineering work, including the ITER-like wall (ILW: Be wall and W divertor) installed in 2010, improved diagnostic capabilities now fully available, and a major Neutral Beam Injection (NBI) upgrade providing record power in 2019-2020, and it tested the technical and procedural preparations for safe operation with tritium. Research along three complementary axes yielded a wealth of new results. Firstly, the JET plasma programme delivered scenarios suitable for high fusion power and alpha-particle physics in the coming D-T campaign (DTE2), with record sustained neutron rates, as well as plasmas for clarifying the impact of isotope mass on the plasma core, the edge and plasma-wall interactions, and for ITER pre-fusion power operation. The efficacy of the newly installed Shattered Pellet Injector for mitigating disruption forces and runaway electrons was demonstrated. Secondly, research on the consequences of long-term exposure to JET-ILW plasma was completed, with emphasis on wall damage and fuel retention, and with analyses of wall materials and dust particles that will help validate assumptions and codes for the design and operation of ITER and DEMO. Thirdly, the nuclear technology programme, aiming to deliver maximum technological return from operations in D, T and D-T, benefited from the highest D-D neutron yield in years, securing results for validating radiation transport and activation codes, and nuclear data for ITER.

