About an analytical verification of quasi-continuum methods with Γ-convergence techniques

2013 ◽  
Vol 1535 ◽  
Author(s):  
Mathias Schäffner ◽  
Anja Schlömerkemper

Abstract. Quasi-continuum (QC) methods are computational techniques that reduce the complexity of atomistic simulations in a static setting while keeping information on small-scale structures and effects. The main idea is to couple atomistic and continuum models and thus to obtain quite detailed but still not too expensive numerical simulations. We aim at a mathematically rigorous verification of QC methods by means of discrete-to-continuum limits. In this article we present our first results for the so-called quasi-nonlocal QC method in the context of fracture mechanics. To this end we start from a one-dimensional chain of atoms with nearest and next-to-nearest neighbour interactions of Lennard-Jones type. This serves as the fully atomistic model, whose Γ-limits (of zeroth and first order) for an infinite number of atoms are known [7]. The QC models we construct coincide with this fully atomistic model in the atomistic region; in the continuum region we approximate the next-to-nearest neighbour interactions by a nearest neighbour potential related to the so-called Cauchy-Born rule. Further, we choose certain representative atoms in order to coarsen the mesh in the continuum region. It turns out that the selection of the representative atoms is crucial and influences the Γ-limits. We regard a QC model as good if the Γ-limits of zeroth and first order, or at least their minimal values and minimizers, are the same as those of the fully atomistic model. Our analysis shows that, while in an elastic regime only the size of the atomistic region matters, in the case of fracture a proper choice of the representative atoms is an essential ingredient.
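As a minimal sketch of the fully atomistic model described in this abstract, the energy of a one-dimensional chain with nearest (NN) and next-to-nearest neighbour (NNN) Lennard-Jones interactions can be written down directly; the potential parameters and the uniform reference spacing below are illustrative assumptions, not values from the article:

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Classical 12-6 Lennard-Jones pair potential (illustrative parameters)."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def chain_energy(x):
    """Total energy of a 1D atomistic chain with nearest (NN) and
    next-to-nearest neighbour (NNN) Lennard-Jones interactions."""
    x = np.asarray(x, dtype=float)
    e_nn = lennard_jones(x[1:] - x[:-1]).sum()    # pairs with |i - j| = 1
    e_nnn = lennard_jones(x[2:] - x[:-2]).sum()   # pairs with |i - j| = 2
    return e_nn + e_nnn

# Uniformly spaced chain at the NN equilibrium distance 2**(1/6)
x = np.arange(8) * 2.0 ** (1.0 / 6.0)
print(chain_energy(x))
```

In a QC scheme of the kind discussed above, this energy would be kept as-is in the atomistic region, while the NNN terms are replaced by an effective NN (Cauchy-Born) potential in the continuum region.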

2020 ◽  
Vol 11 (1) ◽  
pp. 1-21
Author(s):  
Bastiaan Bruinsma

Abstract. While the design of voting advice applications (VAAs) is receiving an increasing amount of attention, one aspect has until now been overlooked: their visualisations. This is remarkable, as it is those visualisations that communicate the advice of the VAA to the user. This article therefore aims to provide a first look at which visualisations VAAs adopt, why they adopt them, and how users comprehend them. For this, I look at how design choices, specifically those on matching, influence the type of visualisation VAAs not only do, but also have to, use. Second, I report the results of a small-scale experiment that examined whether all users comprehend similar visualisations in the same way. I find that this is often not the case and that users' interpretations often differ. These first results suggest that VAA visualisations are wrongly underappreciated and demand closer attention from VAA designers.


2009 ◽  
Vol 137 (10) ◽  
pp. 3339-3350 ◽  
Author(s):  
Ramachandran D. Nair

Abstract A second-order diffusion scheme is developed for the discontinuous Galerkin (DG) global shallow-water model. The shallow-water equations are discretized on the cubed sphere tiled with quadrilateral elements relying on a nonorthogonal curvilinear coordinate system. In the viscous shallow-water model the diffusion terms (viscous fluxes) are approximated with two different approaches: 1) the element-wise localized discretization without considering the interelement contributions and 2) the discretization based on the local discontinuous Galerkin (LDG) method. In the LDG formulation the advection–diffusion equation is solved as a first-order system. All of the curvature terms resulting from the cubed-sphere geometry are incorporated into the first-order system. The effectiveness of each diffusion scheme is studied using the standard shallow-water test cases. The approach of element-wise localized discretization of the diffusion term is easy to implement but found to be less effective, and with relatively high diffusion coefficients, it can adversely affect the solution. The shallow-water tests show that the LDG scheme converges monotonically and that the rate of convergence is dependent on the coefficient of diffusion. Also the LDG scheme successfully eliminates small-scale noise, and the simulated results are smooth and comparable to the reference solution.


2006 ◽  
Vol 13 (2) ◽  
pp. 205-222 ◽  
Author(s):  
G. V. Levina ◽  
I. A. Burylov

Abstract. A numerical approach is substantiated for searching for a large-scale alpha-like instability in thermoconvective turbulence. The main idea of the search strategy is the application of a forcing function that admits a physical interpretation. The forcing simulates the influence of small-scale helical turbulence generated in a rotating fluid with internal heat sources and is applied to naturally induced, fully developed convective flows. The strategy is tested using Rayleigh-Bénard convection in an extended horizontal layer of incompressible fluid heated from below. The most important finding is an enlargement of the typical horizontal scale of the forming helical convective structures, accompanied by the merging of cells, an essential increase in the kinetic energy of the flows, and an intensification of heat transfer. The modeling results explain how the helical feedback can work, providing non-zero mean helicity generation and the mutual intensification of horizontal and vertical circulation, and demonstrate how the energy of the additional helical source can be effectively converted into the energy of an intensive large-scale vortex flow.


Author(s):  
Marco A. P. Rosas ◽  
Ana Paula F. Souza ◽  
Marcos V. Rodrigues ◽  
Danilo Machado L. da Silva

In this paper the behavior of pipelines and the relationship between hydrostatic collapse pressure and a diametrically opposed radial compressive force are analyzed. This study introduces a research effort aimed at assessing the pipeline collapse pressure based on the radial collapse force. First, the hydrostatic collapse pressure is analyzed for pipes with different diameter-to-wall-thickness ratios (D/t) and ovalities, using a classical assessment (the DNV method) and numerical finite element (FE) models. The compressive radial force is then analyzed using numerical models validated by a small-scale ring specimen test. Finally, the relationship between hydrostatic collapse pressure and compressive radial force is discussed. These first results show that the radial force is a quadratic function of the collapse pressure.
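The reported quadratic relationship between collapse pressure and radial force can be illustrated with an ordinary least-squares polynomial fit; the data points below are purely hypothetical placeholders constructed for the sketch, not values from the study:

```python
import numpy as np

# Hypothetical (collapse pressure, radial force) pairs -- placeholder data only
pressure = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # e.g. MPa
force = 0.8 * pressure**2 + 5.0 * pressure + 12.0        # synthetic quadratic trend

# Fit F(p) = a*p^2 + b*p + c, the functional form suggested by the results
a, b, c = np.polyfit(pressure, force, deg=2)
print(f"F(p) ~ {a:.3f}*p^2 + {b:.3f}*p + {c:.3f}")
```

With experimental (p, F) pairs in place of the synthetic ones, the fitted coefficients would quantify the relationship the paper describes.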


2020 ◽  
Vol 8 (4) ◽  
pp. 256-269 ◽  
Author(s):  
Maximilian S. T. Wanner

Many suggestions have been made on what motivates countries to expand their measures for disaster risk reduction (DRR), including the frequency and severity of natural hazards, accountability mechanisms, and governance capacity. Although theoretical arguments have been developed and evidence collected from small-scale case studies, few studies have attempted to explain the substantial variation in the adoption of DRR measures across countries. This study combines available data on DRR measures, natural hazard events, governance, and socioeconomic characteristics to provide a systematic assessment of the changes that have occurred in the state of DRR at the national level. In line with theoretical explanations, there are indeed associations between several measures of frequency and severity and the development of DRR status. Additionally, voice and accountability mechanisms, as well as development aid, might facilitate positive change. Although these first results of a global comparative study on change in DRR have to be interpreted cautiously, they are a step toward understanding the drivers of change at the national level.


2021 ◽  
Vol 94 (3) ◽  
Author(s):  
Gesualdo Delfino

Abstract. The two-dimensional case occupies a special position in the theory of critical phenomena due to the exact results provided by lattice solutions and, directly in the continuum, by the infinite-dimensional character of the conformal algebra. However, some sectors of the theory, most notably criticality in systems with quenched disorder and short-range interactions, have appeared out of reach of exact methods and have lacked the insight coming from analytical solutions. In this article, we review recent progress achieved by implementing conformal invariance within the particle description of field theory. The formalism yields exact unitarity equations whose solutions classify critical points with a given symmetry. It provides new insight in the case of pure systems, as well as the first exact access to criticality in the presence of short-range quenched disorder. Analytical mechanisms emerge that, in the random case, allow for the superuniversality of some critical exponents and make explicit the softening of first-order transitions by disorder.


2019 ◽  
Author(s):  
Marc Schleiss

Abstract. Spatial downscaling of rainfall fields is a challenging mathematical problem for which many different types of methods have been proposed. One popular solution consists in redistributing rainfall amounts over smaller and smaller scales by means of a discrete multiplicative random cascade (DMRC). This works well for slowly varying, homogeneous rainfall fields but often fails in the presence of intermittency (i.e., large numbers of zero rainfall values). The most common workaround in this case is to use two separate cascade models, one for the occurrence and another for the intensity. In this paper, a new and simpler approach based on the notion of equal-volume areas (EVAs) is proposed. Unlike classical cascades, where rainfall amounts are redistributed over grid cells of equal size, the EVA cascade splits grid cells into areas of different sizes, each of them containing exactly half of the original amount of water. The relative areas of the sub-grid cells are determined by drawing random values from a logit-normal cascade generator model with a scale- and intensity-dependent standard deviation. The process ends when the amount of water in each sub-grid cell is smaller than a fixed bucket capacity, at which point the output of the cascade can be re-sampled over a regular Cartesian mesh. The present paper describes the implementation of the EVA cascade model and gives some first results for 100 selected events in the Netherlands. Performance is assessed by comparing the outputs of the EVA model to bilinear interpolation and to a classical DMRC model based on fixed grid cell sizes. Results show that, on average, the EVA cascade outperforms the classical method, producing fields with more realistic distributions, small-scale extremes, and spatial structures. Improvements are mostly credited to the higher robustness of the EVA model to the presence of intermittency and to the lower variance of its generator. However, improvements are not systematic, and both approaches have their advantages and weaknesses. For example, while the classical cascade tends to overestimate small-scale extremes and variability, the EVA model tends to produce fields that are slightly too smooth and blocky compared with observations.
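The splitting rule described in this abstract can be sketched in one dimension: each interval is divided into two sub-intervals of unequal width, each holding exactly half of the water, with the width fraction drawn from a logit-normal generator, until every sub-interval holds less than the bucket capacity. This is a simplified sketch only; the paper works with two-dimensional fields and a scale- and intensity-dependent standard deviation, whereas the constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def logit_normal(sigma):
    """Draw a fraction in (0, 1) from a logit-normal distribution
    (constant sigma here for simplicity)."""
    return 1.0 / (1.0 + np.exp(-rng.normal(0.0, sigma)))

def eva_cascade(x0, x1, water, bucket=1.0, sigma=0.5):
    """1D equal-volume-area cascade: split the interval into two sub-intervals
    of unequal width, each holding exactly half of the water, until every
    sub-interval holds less than the bucket capacity."""
    if water < bucket:
        return [(x0, x1, water)]
    f = logit_normal(sigma)              # relative width of the left part
    xm = x0 + f * (x1 - x0)
    return (eva_cascade(x0, xm, water / 2.0, bucket, sigma)
            + eva_cascade(xm, x1, water / 2.0, bucket, sigma))

cells = eva_cascade(0.0, 10.0, water=8.0, bucket=1.0)
print(len(cells), sum(w for _, _, w in cells))
```

By construction the cascade conserves the total water exactly, which is the property that distinguishes it from classical DMRCs, where the weights, not the areas, are random.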


2012 ◽  
Vol 2012 ◽  
pp. 1-19 ◽  
Author(s):  
Guido Sciavicco

The role of time in artificial intelligence is extremely important. Interval-based temporal reasoning can be seen as a generalization of the classical point-based one, and the first results in this field date back to Hamblin (1972) and van Benthem (1991) from the philosophical point of view, to Allen (1983) from the algebraic and first-order one, and to Halpern and Shoham (1991) from the modal logic one. Without purporting to provide a comprehensive survey of the field, we take the reader on a journey through the main developments in modal and first-order interval temporal reasoning over the past ten years and outline some landmark results on expressiveness and (un)decidability of the satisfiability problem for the family of modal interval logics.
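Allen's (1983) interval algebra mentioned above classifies any pair of proper intervals into exactly one of thirteen basic relations. A minimal sketch (the function and relation names are my own labels, following the common English names for Allen's relations):

```python
def allen_relation(a, b):
    """Return the basic Allen relation holding between intervals a = (a0, a1)
    and b = (b0, b1), assuming proper intervals with a0 < a1 and b0 < b1."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:  return "before"
    if b1 < a0:  return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equals"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    return "overlaps" if a0 < b0 else "overlapped-by"

print(allen_relation((1, 3), (2, 5)))   # overlaps
```

Exactly one branch fires for every pair of proper intervals, which is the jointly exhaustive and pairwise disjoint property that makes the thirteen relations an algebra.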


2020 ◽  
Vol 239 ◽  
pp. 11005 ◽  
Author(s):  
M. Diakaki ◽  
S. Chen ◽  
G. Noguere ◽  
D. Bernard ◽  
P. Tamagno ◽  
...  

The evaluation of 56Fe neutron-induced reactions is currently ongoing at CEA-Cadarache using the code CONRAD, with the goal of covering the whole energy range from the Resolved Resonance Region to the continuum and estimating the corresponding uncertainties and covariance matrices. Some first results and issues arising from this work are presented and discussed here, specifically concerning the analysed transmission and capture data in the Resolved Resonance Region, as well as the optical and statistical model calculations in the fast neutron energy range.
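As a hedged illustration of the resolved-resonance physics underlying the transmission and capture analysis mentioned above, a single isolated resonance can be sketched with a Lorentzian (single-level Breit-Wigner-like) line shape; the resonance energy, width, and peak cross section below are illustrative numbers, not 56Fe evaluation parameters, and the energy dependence of the widths and penetrabilities is neglected:

```python
import numpy as np

def resonance_xs(E, E0=27.7e3, gamma=1.5e3, sigma0=100.0):
    """Lorentzian resonance line shape: E, E0, gamma in eV; sigma0 in barns.
    Peaks at sigma0 for E = E0, falls to sigma0/2 at E = E0 +/- gamma/2."""
    return sigma0 * (gamma / 2.0) ** 2 / ((E - E0) ** 2 + (gamma / 2.0) ** 2)

E = np.linspace(20e3, 35e3, 1001)
sigma = resonance_xs(E)
print(sigma.max())
```

Codes such as CONRAD fit the parameters of many such resonances (in more refined R-matrix formalisms) to the measured transmission and capture data, propagating the resulting covariances.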


Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7776
Author(s):  
Andrzej Urbaniec ◽  
Anna Łaba-Biel ◽  
Anna Kwietniak ◽  
Imoleayo Fashagba

The Upper Cretaceous complex in the central part of the Carpathian Foreland (southern Poland) is relatively poorly recognized and described. Its formations can be classified as an unconventional reservoir due to poor reservoir properties as well as a low recovery factor. The main aim of this article is to expand this knowledge with conclusions resulting from the analysis of the latest seismic data using seismic sequence stratigraphy. Seismic attribute analysis was also utilized. The recognition of the depositional architecture, based on both chronostratigraphic horizons and Wheeler diagram interpretations, was of paramount importance. A further result was the possibility of using the chronostratigraphic image for tectonostratigraphic interpretation. Two tectonostratigraphic units corresponding to megasequences were distinguished. The tectonic setting of the analyzed interval is associated with global processes noted by other authors in other parts of the central European Late Cretaceous basin, but is also locally accompanied by evidence of small-scale tectonics. This study fills a gap in the paleogeography of the Late Cretaceous sedimentary basin of the Carpathian Foreland. It presents the first results of a detailed reconstruction of the basin paleogeography and an attempt to determine the impact of both eustatic and tectonic factors on sedimentation processes.

