A New Precursor Integral Method for Solving Space-Dependent Kinetic Equations in Neutronic and Thermal-Hydraulic Coupling System

2020, Vol 2020, pp. 1-15
Author(s): Yingjie Wu, Baokun Liu, Han Zhang, Jiong Guo, Fu Li, ...

The accurate prediction of transient behavior in coupled neutronic and thermal-hydraulic systems is important in nuclear reactor safety analysis, where a large-scale, strongly stiff nonlinear coupled system must be solved efficiently. To reduce the stiffness and the large computational cost of the coupled system, high-performance numerical techniques for solving the delayed neutron precursor equation are a key issue. In this work, a new precursor integral method with an exponential approximation is proposed and compared with the widely used Taylor-approximation-based precursor integral methods. The truncation errors of the exponential and Taylor approximations are analyzed and compared. Moreover, an adaptive time-step control technique based on the flux exponential approximation is put forward. The procedure is tested on a 2D neutron kinetics benchmark and a simplified high-temperature gas-cooled reactor pebble-bed module (HTR-PM) multiphysics problem, using the efficient Jacobian-free Newton–Krylov method. Results show that selecting an appropriate flux approximation in the precursor integral method improves efficiency and precision compared with the traditional method. With the exponential integral method and the adaptive time-step technique, the computation time for the HTR-PM model is reduced to one-ninth at the same accuracy.
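To illustrate the two flux approximations compared above, the sketch below advances the one-group precursor equation dC/dt = -λC + (β/Λ)φ(t) over a single step, once with an exponentially fitted flux and once with a linear (first-order Taylor) flux. This is a minimal reconstruction of the general idea, not the paper's implementation; all parameter values in the usage are illustrative.

```python
import math

def precursor_step_exp(C_n, phi_n, phi_np1, dt, lam, beta, Lam):
    """One precursor step with an exponential flux approximation
    phi(t_n + s) ~ phi_n * exp(omega * s). Assumes omega != -lam."""
    omega = math.log(phi_np1 / phi_n) / dt          # fitted flux exponent
    # exact integral of exp(-lam*(dt - s)) * phi_n * exp(omega*s) over [0, dt]
    integral = phi_n * (math.exp(omega * dt) - math.exp(-lam * dt)) / (omega + lam)
    return C_n * math.exp(-lam * dt) + (beta / Lam) * integral

def precursor_step_lin(C_n, phi_n, phi_np1, dt, lam, beta, Lam):
    """Same step with a linear (first-order Taylor) flux approximation."""
    b = (phi_np1 - phi_n) / dt                      # flux slope over the step
    e = math.exp(-lam * dt)
    # integral of exp(-lam*(dt - s)) * (phi_n + b*s) over [0, dt]
    integral = phi_n * (1.0 - e) / lam + b * (dt / lam - (1.0 - e) / lam**2)
    return C_n * e + (beta / Lam) * integral
```

When the flux really grows exponentially over the step, the exponential variant reproduces the analytic solution to round-off, while the linear variant carries a visible truncation error, which is the mechanism behind the efficiency gain reported above.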

2021
Author(s): Samier Pierre, Raguenel Margaux, Darche Gilles

Solving the equations governing multiphase flow in geological formations involves the generation of a mesh that faithfully represents the structure of the porous medium. This challenging mesh generation task can be greatly simplified by the use of unstructured (tetrahedral) grids that conform to the complex geometric features present in the subsurface. However, running a million-cell simulation problem using an unstructured grid on a real, faulted field case remains a challenge for two main reasons. First, the workflow typically used to construct and run the simulation problems has been developed for structured grids and needs to be adapted to the unstructured case. Second, the use of unstructured grids that do not satisfy the K-orthogonality property may require advanced numerical schemes that preserve the accuracy of the results and reduce potential grid orientation effects. These two challenges are at the center of the present paper. We describe in detail the steps of our workflow to prepare and run a large-scale unstructured simulation of a real field case with faults. We perform the simulation using four different discretization schemes, including the cell-centered Two-Point and Multi-Point Flux Approximation (respectively, TPFA and MPFA) schemes, the cell- and vertex-centered Vertex Approximate Gradient (VAG) scheme, and the cell- and face-centered hybrid Mimetic Finite Difference (MFD) scheme. We compare the results in terms of accuracy, robustness, and computational cost to determine which scheme offers the best compromise for the test case considered here.
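For readers unfamiliar with the simplest of the four schemes, here is a textbook one-dimensional TPFA pressure solve on a uniform grid with harmonic face transmissibilities. It is a generic sketch under those assumptions, not the paper's unstructured, faulted-field workflow.

```python
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Steady incompressible 1D pressure solve with the two-point flux
    approximation. perm: cell permeabilities; Dirichlet pressures are
    imposed at the outer faces (half a cell away from the end centers)."""
    n = len(perm)
    T = np.zeros(n + 1)
    # interior faces: harmonic average of the two neighboring cells
    T[1:-1] = 2.0 / (dx / perm[:-1] + dx / perm[1:])
    # boundary faces: half-cell transmissibility
    T[0] = 2.0 * perm[0] / dx
    T[-1] = 2.0 * perm[-1] / dx
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    b[0] += T[0] * p_left
    b[-1] += T[-1] * p_right
    return np.linalg.solve(A, b)
```

On a homogeneous medium this reproduces the linear pressure profile exactly; the K-orthogonality issue discussed above arises when the same two-point stencil is applied to distorted, unstructured cells.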


Mathematics, 2018, Vol 6 (8), pp. 132
Author(s): Harwinder Singh Sidhu, Prashanth Siddhamshetty, Joseph Kwon

Hydraulic fracturing has played a crucial role in enhancing the extraction of oil and gas from deep underground sources. The two main objectives of hydraulic fracturing are to produce fractures with a desired fracture geometry and to achieve the target proppant concentration inside the fracture. Recently, some efforts have been made to accomplish these objectives with model predictive control (MPC) theory, based on the assumption that the rock mechanical properties, such as the Young's modulus, are known and spatially homogeneous. However, this approach may not be optimal if there is uncertainty in the rock mechanical properties. Furthermore, the computational requirements associated with the MPC approach to calculate the control moves at each sampling time can be significantly high when the underlying process dynamics is described by a nonlinear large-scale system. To address these issues, the current work proposes an approximate dynamic programming (ADP) based approach for the closed-loop control of hydraulic fracturing to achieve the target proppant concentration at the end of pumping. ADP is a model-based control technique which combines a high-fidelity simulation and a function approximator to alleviate the "curse of dimensionality" associated with the traditional dynamic programming (DP) approach. A series of simulation results demonstrates that the ADP-based controller achieves the target proppant concentration at the end of pumping at a fraction of the computational cost required by MPC while handling the uncertainty in the Young's modulus of the rock formation.
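The core ADP move, replacing exact dynamic programming with sampled one-step lookahead plus a function approximator, can be sketched on a toy scalar control problem. The dynamics, cost, and quadratic feature below are illustrative assumptions for the sketch, not the hydraulic fracturing model of the paper.

```python
import numpy as np

def fitted_value_iteration(a=0.9, b=1.0, n_iter=60):
    """ADP sketch: approximate V(x) ~ w * x**2 for the scalar system
    x' = a*x + b*u with stage cost x**2 + u**2. Each sweep does a greedy
    one-step lookahead over a discrete action grid at sampled states, then
    refits w by least squares (the 'function approximator' step)."""
    states = np.linspace(-2.0, 2.0, 21)
    actions = np.linspace(-2.0, 2.0, 801)
    w = 0.0
    for _ in range(n_iter):
        targets = []
        for x in states:
            # Bellman backup: cost-to-go under the current approximation
            q = x**2 + actions**2 + w * (a * x + b * actions) ** 2
            targets.append(q.min())
        phi = states**2                      # single quadratic feature
        w = float(phi @ np.array(targets) / (phi @ phi))
    return w
```

For this linear-quadratic toy case the iteration converges to the scalar Riccati fixed point w = 1 + a²w/(1 + b²w), so the quality of the approximation can be checked in closed form; for a nonlinear fracture model the same loop would use the high-fidelity simulator in place of the one-line dynamics.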


2010, Vol 132 (3)
Author(s): F. Wei, G. T. Zheng

Direct time integration methods are usually applied to determine the dynamic response of systems with local nonlinearities. Nevertheless, these methods are computationally expensive for predicting the steady-state response. To significantly reduce the computational effort, a new approach is proposed for the multiharmonic response analysis of dynamical systems with local nonlinearities. The approach is based on the describing function (DF) method and linear receptance data. With the DF method, the kinetic equations are converted into a set of complex algebraic equations. By using the linear receptance data, the dimension of the set of complex algebraic equations that must be solved iteratively is related only to the nonlinear degrees of freedom (DOFs). A cantilever beam with a local nonlinear element is presented to show the procedure and performance of the proposed approach. The approach can greatly reduce the size and computational cost of the problem and is thus applicable to large-scale systems with local nonlinearities.
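The central DF step replaces a nonlinearity by its first-harmonic equivalent. For a cubic stiffness f(x) = k₃x³ under x(t) = X sin(ωt), the describing function is analytically (3/4)k₃X²; the numerical Fourier projection below is a generic sketch of that step, not the authors' code.

```python
import numpy as np

def describing_function_cubic(X, k3, n=4096):
    """First-harmonic (describing function) equivalent stiffness of
    f(x) = k3 * x**3 for a harmonic input x = X*sin(theta), obtained by
    projecting f onto the fundamental sine and normalizing by X."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = X * np.sin(theta)
    f = k3 * x**3
    # discrete first-harmonic sine coefficient: (1/pi) * integral of f*sin
    b1 = (2.0 / n) * np.sum(f * np.sin(theta))
    return b1 / X
```

Substituting such amplitude-dependent equivalent stiffnesses for the nonlinear elements is what turns the kinetic equations into the set of complex algebraic equations mentioned above.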


2021, Vol 11 (1)
Author(s): Daiji Ichishima, Yuya Matsumura

Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide application in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived using the Migdal–Kadanoff approximation. The RMD method improves the computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $2^{n(D+1)}$ over conventional MD, where $D$ is the spatial dimension and $n$ is the number of applied renormalization transforms. We verify RMD with two simulations: melting of an aluminum slab and collision of aluminum spheres. In both problems the expectation values of physical quantities are in good agreement after renormalization, while the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale, giving $\nu = 0.63 \pm 0.01$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number satisfies $De \ll 1$.
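The quoted cost reduction is a simple closed form, transcribed below as a one-line helper (the function and parameter names are mine, not the paper's):

```python
def rmd_speedup(n, D=3):
    """Stated RMD efficiency gain over conventional MD after n renormalization
    transforms in D spatial dimensions: a factor of 2**(n*(D+1))."""
    return 2 ** (n * (D + 1))
```

In three dimensions a single renormalization transform thus already yields a sixteenfold reduction, and two transforms a 256-fold reduction.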


Author(s): Mahdi Esmaily Moghadam, Yuri Bazilevs, Tain-Yen Hsia, Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool that can be used to model the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges for the linear algebraic equation solver, producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper we exploit this strong coupling to obtain a novel and efficient linear solver (LS) algorithm. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.


2011, Vol 383-390, pp. 1470-1476
Author(s): Hao Wang, Ding Guo Shao, Lu Xu

Lithium batteries are employed widely in many industrial applications. Parameter mismatch between the lithium batteries in a series string is the critical limit on large-scale application in high-power situations, and maintaining equalization between batteries is the key technique in lithium battery applications. This paper reviews common equalization techniques and proposes a new type of lithium Battery Equalization and Management System (BEMS) employing an isolated DC-DC converter structure. The system integrates both equalization and management functions by using a distributed three-level control structure and digital control techniques. With this control method, both the flexibility of the balance control strategy and the compatibility with different battery strings are improved dramatically. The experimental results show that equalization and efficiency are optimized and that the battery string's life span is extended.


2006, Vol 18 (12), pp. 2959-2993
Author(s): Eduardo Ros, Richard Carrillo, Eva M. Ortigosa, Boris Barbour, Rodrigo Agís

Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
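The essence of event-driven simulation with propagation delays can be sketched with a priority queue: source spikes are expanded into delayed synaptic delivery events, which are then processed in time order. This is a minimal generic illustration, not the ED-LUT implementation or its two-stage queue.

```python
import heapq
import itertools

def run_events(spikes, delays, t_end):
    """Minimal event-driven sketch. spikes: list of (time, source) axonal
    spikes; delays: dict mapping source -> list of (target, delay).
    Returns the synaptic deliveries processed in time order up to t_end."""
    counter = itertools.count()                 # tie-breaker for equal times
    queue = [(t, next(counter), src, None) for t, src in spikes]
    heapq.heapify(queue)
    delivered = []
    while queue:
        t, _, node, target = heapq.heappop(queue)
        if t > t_end:
            break
        if target is None:
            # axonal spike: expand into one delayed event per synapse
            for tgt, d in delays.get(node, []):
                heapq.heappush(queue, (t + d, next(counter), node, tgt))
        else:
            delivered.append((t, node, target))  # synaptic delivery
    return delivered
```

In a full simulator, processing a delivery would update the target neuron's state from look-up tables and possibly enqueue a new spike; the queue discipline shown here is what lets cost scale with the number of events rather than with simulated time steps.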


Author(s): David Forbes, Gary Page, Martin Passmore, Adrian Gaylard

This study evaluates computational methods in reproducing experimental data for a generic sports utility vehicle (SUV) geometry and assesses the influence of fixed and rotating wheels for this geometry. Initially, comparisons are made in the wake structure and base pressures between several CFD codes and experimental data. Steady-state RANS methods were shown to be unsuitable for this geometry because of large-scale unsteadiness in the wake caused by separation at the sharp trailing edge and rear-wheel wake interactions. Unsteady RANS (URANS) offered no improvement in wake prediction despite a significant increase in computational cost. The detached-eddy simulation (DES) and Lattice–Boltzmann (LBM) methods showed the best agreement with the experimental results in both wake structure and base pressure, with LBM running in approximately a fifth of the time required for DES. The study then analyses the influence of rotating wheels and a moving ground plane relative to a fixed wheel and ground plane arrangement. Introducing wheel rotation and a moving ground was shown to increase the base pressure and reduce the drag acting on the vehicle compared with the fixed case. However, compared with the experimental standoff case, the variations in drag and lift coefficients were minimal but misleading, as significant variations in the surface pressures were present.


2013, Vol 2013, pp. 1-10
Author(s): Lei Luo, Chao Zhang, Yongrui Qin, Chunyuan Zhang

With the explosive growth of data volume in modern applications such as web search and multimedia retrieval, hashing is becoming increasingly important for efficient nearest neighbor (similar item) search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by maximum variance unfolding, a classic nonlinear dimensionality reduction algorithm, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
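The variance-maximization objective can be illustrated with a standard spectral relaxation: project centered data onto its directions of maximal variance and threshold at zero. This sketch is a common baseline (PCA hashing), not the paper's column-generation learner or its anchor-graph extension.

```python
import numpy as np

def pca_hash(X, n_bits):
    """Sketch of variance-maximizing binary codes: center the data, take the
    top principal directions (maximal-variance projections), and binarize
    by sign. Returns an (n_samples, n_bits) array of 0/1 codes."""
    Xc = X - X.mean(axis=0)                  # center so sign(0) splits the data
    # right singular vectors = eigenvectors of the covariance (top variance)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_bits].T > 0).astype(np.uint8)
```

Because thresholding a centered maximal-variance projection tends to split the data into balanced halves, each bit carries close to maximal variance; what the relaxation gives up, and what the learned method above addresses, is the explicit preservation of local neighborhood structure.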


2006, Vol 04 (03), pp. 639-647
Author(s): Eleazar Eskin, Roded Sharan, Eran Halperin

The common approaches for haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled heuristically, due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum likelihood criterion. Among other things, this criterion takes into account the physical distance between neighboring single nucleotide polymorphisms. The approach is very efficient, has been applied to several large-scale datasets, and was shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .

