Microstructural Dynamic Study of Grain Growth

1985 ◽  
Vol 63 ◽  
Author(s):  
M. P. Anderson ◽  
G. S. Grest ◽  
D. J. Srolovitz

The complete prediction of microstructural development in polycrystalline solids as a function of time and temperature is a major objective in materials science, but has not yet been possible, primarily due to the complexity of the grain interactions. The evolution of the polycrystalline structure depends upon the precise specification of the coordinates of the grain boundary network, the crystallographic orientations of the grains, and the postulated microscopic mechanisms by which elements of the boundaries are assumed to move. Therefore, a general analytical solution to this multivariate problem has not yet been developed. Recently, we have been able to incorporate these aspects of the grain interactions, and have developed a computer model which predicts the main features of the microstructure from first principles [1,2].

The polycrystal is mapped onto a discrete lattice by dividing the material into small area (2d) or volume (3d) elements and placing the centers of these elements on lattice points. Interactions and dynamics are then defined for the individual elements which are analogous to those postulated in continuous systems. This discrete model preserves the topological features of real materials and can be studied by computer simulation using Monte Carlo techniques. In this paper we report the application of the Monte Carlo method to the metallurgical phenomenon of grain growth under isothermal annealing. Extension of the model to treat primary recrystallization is presented elsewhere [3,4].
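A minimal sketch of the kind of 2-D Potts-model Monte Carlo scheme described here may help fix ideas; the lattice size, number of orientations Q, and temperature below are illustrative assumptions, not the paper's values:

```python
# Minimal 2-D Potts-model grain-growth sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
L, Q, kT = 64, 48, 0.0                 # lattice size, orientations, temperature
spins = rng.integers(1, Q + 1, size=(L, L))

def boundary_energy(s, i, j, q):
    """Number of unlike nearest neighbours if site (i, j) had orientation q."""
    nbrs = [s[(i - 1) % L, j], s[(i + 1) % L, j],
            s[i, (j - 1) % L], s[i, (j + 1) % L]]
    return sum(q != n for n in nbrs)

def monte_carlo_step(s):
    """One Monte Carlo step: L*L attempted orientation changes."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        q_new = rng.integers(1, Q + 1)          # propose a new orientation
        dE = boundary_energy(s, i, j, q_new) - boundary_energy(s, i, j, s[i, j])
        if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
            s[i, j] = q_new                     # Metropolis acceptance

for _ in range(100):                   # isothermal anneal: grains coarsen
    monte_carlo_step(spins)
```

At kT = 0 only changes that do not increase the boundary energy are accepted, which is the usual limit for curvature-driven grain growth.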

2004 ◽  
Vol 467-470 ◽  
pp. 1075-1080 ◽  
Author(s):  
F. Lin ◽  
Andrew Godfrey ◽  
Mark A. Miodownik ◽  
Qing Liu

After primary recrystallization of heavily rolled (>98% reduction) high-purity Ni (99.999%) tapes, the cube texture fraction can range from 45% to 65%. Annealing at temperatures above 1000 °C leads to cube texture volume fractions of >95% as a result of grain growth. A Monte Carlo Potts model was used to simulate this annealing process. The starting microstructures for the simulations were generated from experimental data taken using electron backscatter pattern analysis. The simulation results suggest that, in addition to the grain boundary misorientation and energy functions used, the misorientation texture and grain sizes are also determining factors in the grain growth process. As the grain size after recrystallization is comparable to the tape thickness, the surface energy of the grains may also be an important factor. Simulations were therefore also carried out using a surface energy term; if the cube grains have a lower surface energy, then a stronger cube texture is predicted.
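A hedged sketch of how such a surface-energy term could enter the Potts energy; the cube orientation labels and the energy values are invented for illustration:

```python
# Potts site energy with an orientation-dependent surface term for sites in
# the top/bottom lattice layers (the tape surfaces). Values are illustrative.
import numpy as np

L = 64
CUBE_IDS = set(range(1, 5))              # orientation labels treated as "cube"
GAMMA_CUBE, GAMMA_OTHER = 0.8, 1.0       # assumed relative surface energies

def site_energy(spins, i, j, q):
    """Unlike-neighbour boundary energy plus a surface term at the surfaces."""
    nbrs = [spins[(i - 1) % L, j], spins[(i + 1) % L, j],
            spins[i, (j - 1) % L], spins[i, (j + 1) % L]]
    E = sum(q != n for n in nbrs)
    if i == 0 or i == L - 1:             # site lies on a tape surface
        E += GAMMA_CUBE if q in CUBE_IDS else GAMMA_OTHER
    return E
```

With GAMMA_CUBE < GAMMA_OTHER, orientation changes toward cube grains are favoured at the surfaces, which is the mechanism by which a lower cube surface energy strengthens the cube texture in the simulation.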


Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
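As a concrete anchor for the Markov chain Monte Carlo techniques the book covers, here is a minimal random-walk Metropolis sketch; the toy log-posterior below stands in for what would, in a real application, be the Kalman-filter likelihood of a linearized DSGE model plus the prior:

```python
# Generic random-walk Metropolis sampler (toy target, illustrative settings).
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    """Placeholder log-posterior: standard normal in each dimension."""
    return -0.5 * np.sum(theta ** 2)

def rwmh(theta0, n_draws=5000, scale=0.5):
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = []
    for _ in range(n_draws):
        prop = theta + scale * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)

chain = rwmh(np.zeros(3))                          # posterior draws
```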


2014 ◽  
Vol 6 (1) ◽  
pp. 1006-1015
Author(s):  
Negin Shagholi ◽  
Hassan Ali ◽  
Mahdi Sadeghi ◽  
Arjang Shahvar ◽  
Hoda Darestani ◽  
...  

Medical linear accelerators produce, besides the clinically useful high-energy electron and photon beams, secondary particles such as neutrons, which escalate the delivered dose. In this study the neutron dose at a 10 and 18 MV Elekta linac was obtained using TLD600 and TLD700 dosimeters as well as Monte Carlo simulation. For neutron dose assessment in a 20 × 20 cm2 field, the TLDs were first calibrated: gamma calibration was performed with the 10 and 18 MV linac, and neutron calibration with a 241Am-Be neutron source. For the simulation, the MCNPX code was used, and the calculated neutron dose equivalent was compared with the measured data. The neutron dose equivalent at 18 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 3.3, 4, 5 and 6 cm. The neutron dose at depths of less than 3.3 cm was zero and reached its maximum at a depth of 4 cm (44.39 mSv Gy-1), whereas calculation gave a maximum of 2.32 mSv Gy-1 at the same depth. The neutron dose at 10 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 2.5, 3.3, 4 and 5 cm. No photoneutron dose was observed at depths of less than 3.3 cm, and the maximum, 5.44 mSv Gy-1, occurred at 4 cm, whereas the calculated data showed a maximum of 0.077 mSv Gy-1 at the same depth. The comparison of the measured photoneutron dose with the calculated data along the beam axis at different depths shows that the measured values were much higher than the calculated ones. It therefore appears that TLD600/TLD700 pairs are not suitable dosimeters for neutron dosimetry on the linac central axis, owing to the high photon flux, whereas MCNPX Monte Carlo techniques remain a valuable tool for photonuclear dose studies.
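For readers unfamiliar with the paired-dosimeter technique used here, a brief sketch may help: TLD600 (6LiF) responds to both photons and thermal neutrons, while TLD700 (7LiF) responds essentially only to photons, so the difference between the paired readings isolates the neutron component. The function and numbers below are invented for illustration:

```python
# Illustrative TLD600/TLD700 pair-subtraction estimate of neutron dose.
def neutron_dose_equivalent(m600, m700, k_neutron):
    """Neutron dose equivalent (mSv) from a TLD600/TLD700 pair.

    m600, m700 : readings of the pair at the same point, with photon
                 responses matched via the gamma calibration
    k_neutron  : reading per unit neutron dose equivalent, obtained from
                 the 241Am-Be calibration
    """
    return (m600 - m700) / k_neutron

# Example with invented readings and calibration factor:
H_n = neutron_dose_equivalent(m600=152.0, m700=140.0, k_neutron=2.5)  # 4.8 mSv
```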


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 662
Author(s):  
Mateu Sbert ◽  
Jordi Poch ◽  
Shuning Chen ◽  
Víctor Elvira

In this paper, we present order-invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or, equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, show their relationship with the increasing concave order and the increasing convex order, and observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all the means evolve in the same direction.
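A small numerical illustration of the first-order result, with invented weights and series: when the weight distribution is replaced by one that dominates it in first stochastic order, the arithmetic and harmonic means move in the same direction:

```python
# Check that two quasi-arithmetic means move together under a first-order
# stochastic improvement of the weights (all values invented).
import numpy as np

def qa_mean(w, x, f, f_inv):
    """Weighted quasi-arithmetic mean M_f(w, x) = f^{-1}(sum_i w_i f(x_i))."""
    return f_inv(np.dot(w, f(np.asarray(x, dtype=float))))

x = np.array([1.0, 2.0, 4.0, 8.0])        # monotonic (increasing) series
w_lo = np.array([0.4, 0.3, 0.2, 0.1])     # reference weights
w_hi = np.array([0.1, 0.2, 0.3, 0.4])     # dominates w_lo in first order

arith = lambda w: qa_mean(w, x, lambda t: t, lambda t: t)          # f(t) = t
harm = lambda w: qa_mean(w, x, lambda t: 1 / t, lambda t: 1 / t)   # f(t) = 1/t

assert arith(w_hi) > arith(w_lo) and harm(w_hi) > harm(w_lo)   # both increase
```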


1999 ◽  
Vol 72 (1) ◽  
pp. 68-72
Author(s):  
M. Yu. Al’es ◽  
A. I. Varnavskii ◽  
S. P. Kopysov

2020 ◽  
Vol 26 (3) ◽  
pp. 223-244
Author(s):  
W. John Thrasher ◽  
Michael Mascagni

It has been shown that, when using a Monte Carlo algorithm to estimate the electrostatic free energy of a biomolecule in solution, individual random walks can become entrapped in the geometry. We examine a proposed solution, using a sharp restart during the Walk-on-Subdomains step, in more detail. We show that the point at which this solution introduces significant bias is related to properties intrinsic to the molecule being examined. We also examine two potential methods of generating a sharp restart point and show that both cause no significant bias in the examined molecules and increase the stability of the run times of the individual walks.
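As a hedged illustration of the restart idea, the sketch below applies a periodic sharp restart to a plain walk-on-spheres solver for Laplace's equation on the unit disk; the paper's actual setting (biomolecular geometry, the Walk-on-Subdomains step, and its restart-point generation) is considerably more involved, and every name and parameter here is illustrative:

```python
# Walk-on-spheres for Laplace's equation on the unit disk, with a sharp
# restart every `restart_after` steps (illustrative toy version).
import numpy as np

rng = np.random.default_rng(2)
EPS, MAX_STEPS = 1e-4, 10_000

def g(p):
    """Boundary data on the unit circle; x^2 - y^2 is harmonic inside."""
    return p[0] ** 2 - p[1] ** 2

def wos_with_restart(start, restart_after=500):
    p = np.array(start, dtype=float)
    for step in range(MAX_STEPS):
        if step and step % restart_after == 0:
            p = np.array(start, dtype=float)   # sharp restart of the walk
        r = 1.0 - np.linalg.norm(p)            # distance to the boundary
        if r < EPS:
            return g(p / np.linalg.norm(p))    # absorb on the boundary
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p += r * np.array([np.cos(theta), np.sin(theta)])
    return g(p / np.linalg.norm(p))

# Estimate u(0.3, 0.2); the exact value of x^2 - y^2 there is 0.05.
u = np.mean([wos_with_restart((0.3, 0.2)) for _ in range(2000)])
```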


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 580
Author(s):  
Pavel Shcherbakov ◽  
Mingyue Ding ◽  
Ming Yuchi

Various Monte Carlo techniques for random point generation over sets of interest are widely used in many areas of computational mathematics, optimization, data processing, etc. Whereas for regularly shaped sets such sampling is immediate to arrange, for nontrivial, implicitly specified domains these techniques are not easy to implement. We consider the so-called Hit-and-Run algorithm, a representative of the class of Markov chain Monte Carlo methods, which has become popular in recent years. To perform random sampling over a set, this method requires only the knowledge of the intersection of a line through a point inside the set with the boundary of the set. This component of the Hit-and-Run procedure, known as the boundary oracle, has to be computed quickly when applied to economical point representations of many-dimensional sets within the randomized approach to data mining, image reconstruction, control, optimization, etc. In this paper, we consider several vector and matrix sets typically encountered in control and specified by linear matrix inequalities. Closed-form solutions are proposed for finding the respective points of intersection, leading to efficient boundary oracles; these are generalized to robust formulations in which the system matrices contain norm-bounded uncertainty.
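A minimal sketch of Hit-and-Run with a closed-form boundary oracle for the simplest case, a polytope {x : Ax <= b}; the matrix-set and LMI oracles developed in the paper are analogous but typically reduce to eigenvalue computations. All names and sets below are illustrative:

```python
# Hit-and-Run sampling over a polytope {x : A x <= b} with an exact
# boundary oracle (illustrative example: the square [-1, 1]^2).
import numpy as np

rng = np.random.default_rng(3)

def boundary_oracle(A, b, x, d):
    """Return (t_min, t_max) such that A(x + t d) <= b for t in [t_min, t_max]."""
    num, den = b - A @ x, A @ d
    t_max = np.min(num[den > 0] / den[den > 0]) if np.any(den > 0) else np.inf
    t_min = np.max(num[den < 0] / den[den < 0]) if np.any(den < 0) else -np.inf
    return t_min, t_max

def hit_and_run(A, b, x0, n_samples):
    x, out = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                   # uniform random direction
        t_min, t_max = boundary_oracle(A, b, x, d)
        x = x + rng.uniform(t_min, t_max) * d    # uniform point on the chord
        out.append(x.copy())
    return np.array(out)

A = np.vstack([np.eye(2), -np.eye(2)])           # |x1| <= 1, |x2| <= 1
b = np.ones(4)
samples = hit_and_run(A, b, np.zeros(2), 1000)   # asymptotically uniform
```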

