A Novel Proper Generalized Decomposition (PGD) Based Approach for Non-Matching Grids

Author(s):  
S. Mohamed Nazeer ◽  
Francisco Chinesta ◽  
Adrien Leygue

Proper Generalized Decomposition (PGD) is a recent model reduction technique that has been successfully employed to solve many multidimensional problems. Based on the use of separated representations, the PGD avoids the exponential complexity of standard grid-based discretization techniques and thereby circumvents, or at least alleviates, the curse of dimensionality. With the PGD, the problem's usual coordinates (e.g., space, time), but also model parameters, boundary conditions, and other sources of variability, can be viewed globally as coordinates of a high-dimensional space in which an approximate solution can be computed efficiently at once. Non-matching grids are very common in advanced scientific computing (e.g., contact problems, sub-domain coupling). In this framework, approximate solutions need to be projected from one grid onto a non-matching second grid, an operation whose numerical complexity grows substantially when going from one-dimensional to higher-dimensional interfaces. In this paper, we simulate a domain that carries a coarse mesh on one side and a fine mesh on the other using the PGD, and we show that the PGD can handle such non-matching grids by means of a smooth transition of the separated-representation description.
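The algebraic core of a separated representation can be illustrated with a minimal sketch: a function sampled on a 2D grid is approximated as a sum of products of one-dimensional modes, built greedily by alternating fixed-point updates. This is only the generic rank-1 enrichment idea underlying the PGD, not the paper's non-matching-grid scheme; the grid sizes and target function are illustrative choices.

```python
import numpy as np

# Approximate F(x, y) ~ sum_i X_i(x) * Y_i(y) by greedy rank-1
# enrichment with alternating fixed-point (least-squares) updates.
x = np.linspace(0.0, 1.0, 50)    # coarse grid in x
y = np.linspace(0.0, 1.0, 200)   # fine grid in y
F = np.exp(-5.0 * (x[:, None] - y[None, :]) ** 2)  # non-separable target

modes_x, modes_y = [], []
R = F.copy()                     # residual to be deflated
for _ in range(5):               # number of enrichment modes
    X = np.ones_like(x)
    for _ in range(20):          # alternating fixed-point iterations
        Y = R.T @ X / (X @ X)    # best Y for fixed X (least squares)
        X = R @ Y / (Y @ Y)      # best X for fixed Y
    modes_x.append(X)
    modes_y.append(Y)
    R = R - np.outer(X, Y)       # deflate the captured mode

approx = sum(np.outer(X, Y) for X, Y in zip(modes_x, modes_y))
rel_err = np.linalg.norm(F - approx) / np.linalg.norm(F)
print(f"relative error with 5 modes: {rel_err:.2e}")
```

Because each mode is a product of one-dimensional functions, storage and per-iteration work grow linearly with the number of grid points per coordinate rather than with their product.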

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Rubén Ibáñez ◽  
Emmanuelle Abisset-Chavanne ◽  
Amine Ammar ◽  
David González ◽  
Elías Cueto ◽  
...  

Sparse model identification by means of data is especially cumbersome if the sought dynamics live in a high-dimensional space, since it usually requires an amount of data that is unfeasible in such high-dimensional settings. This well-known phenomenon, known as the curse of dimensionality, is here overcome by means of separated representations. We present a technique, based on the same principles as the Proper Generalized Decomposition, that enables the identification of complex laws in the low-data limit. We provide examples of the performance of the technique in up to ten dimensions.
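As a point of reference for what "sparse model identification" means in low dimension, the following sketch fits a candidate library of monomials to measured dynamics and prunes small terms by iterative hard thresholding. This generic thresholded least-squares approach is not the authors' PGD-based scheme; it is exactly the kind of direct fit whose data demands explode in high-dimensional settings.

```python
import numpy as np

# Identify dx/dt = 0.5*x - 2*x**3 from samples, using a monomial
# library and iterative hard thresholding (generic sparse regression).
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200)
dx = 0.5 * x - 2.0 * x ** 3            # "measured" dynamics

Theta = np.column_stack([x ** k for k in range(5)])  # 1, x, ..., x^4
coef, *_ = np.linalg.lstsq(Theta, dx, rcond=None)

for _ in range(5):                      # prune and refit
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    sol, *_ = np.linalg.lstsq(Theta[:, big], dx, rcond=None)
    coef[big] = sol

print(np.round(coef, 3))                # only the x and x^3 terms survive
```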


Author(s):  
Mohammad Javad Kazemzadeh-Parsi ◽  
Amine Ammar ◽  
Jean Louis Duval ◽  
Francisco Chinesta

Abstract Space separation within the Proper Generalized Decomposition (PGD) rationale allows solving high-dimensional problems as a sequence of lower-dimensional ones. In our former works, different geometrical transformations were proposed for addressing complex shapes and spatially non-separable domains. An efficient implementation of separated representations needs to express the domain as a product of characteristic functions involving the different space coordinates. In the case of complex shapes, more sophisticated geometrical transformations are needed to map the complex physical domain into a regular one where computations are performed. This paper proposes a very efficient route for accomplishing such a space separation: a NURBS-based geometry representation, usual in computer-aided design (CAD), is retained and combined with a fully separated representation, allying efficiency (ensured by the fully separated representation) and generality (by addressing complex geometries). Some numerical examples are considered to prove the potential of the proposed methodology.


2014 ◽  
Vol 611-612 ◽  
pp. 513-520 ◽  
Author(s):  
Diego Canales ◽  
Elias Cueto ◽  
Eric Feulvarch ◽  
Francisco Chinesta

Friction Stir Welding (FSW) is a welding technique increasingly demanded in industry because of its multiple advantages. Despite its wide use, its physical foundations and the effect of the process parameters have not been fully elucidated. Numerical simulations are a powerful tool to achieve a greater understanding of the physics of the problem. Although several approaches for simulating FSW can be found in the literature, all of them present limitations that restrict their applicability in industrial settings. This paper presents a new solution strategy that combines a robust approximation method based on natural-neighbor interpolation with a separated representation of the solution using the Proper Generalized Decomposition (PGD), yielding a 3D updated-Lagrangian strategy that addresses the 3D model while keeping a 2D computational complexity.


2017 ◽  
Author(s):  
R U

All exact algorithms for solving the subset sum problem (SUBSET_SUM) are exponential (brute force, branch-and-bound search, and dynamic programming, which is pseudo-polynomial). To find approximate solutions, both a classical greedy algorithm and its improved variants, as well as different approximation schemes, are used. This paper is an attempt to build another greedy algorithm by transferring representations from analytic geometry to a discrete structure such as a set: a set of size $n$ is identified with an $n$-dimensional space with Euclidean metric, and the subset sum is identified with a (hyper)plane.
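For concreteness, the classical greedy heuristic that the paper takes as its baseline (not the authors' geometric variant) can be sketched in a few lines: items are scanned in decreasing order and taken whenever they still fit under the target.

```python
# Classical greedy heuristic for SUBSET_SUM: sort descending,
# take each item that does not overshoot the target.
def greedy_subset_sum(items, target):
    chosen, total = [], 0
    for v in sorted(items, reverse=True):
        if total + v <= target:
            chosen.append(v)
            total += v
    return chosen, total

# Example: greedy picks 9, 7, 3 and reaches 19 for target 20.
subset, value = greedy_subset_sum([5, 7, 9, 3, 2], 20)
print(subset, value)
```

The heuristic runs in O(n log n) time but can miss the optimum (here 20 = 9 + 5 + 3 + 2 + ... is not found because 5 is rejected after 9 and 7), which is precisely what improved greedy variants and approximation schemes address.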


Author(s):  
Angelo Pasquale ◽  
Amine Ammar ◽  
Antonio Falcó ◽  
Simona Perotto ◽  
Elías Cueto ◽  
...  

Abstract Solutions of partial differential equations can exhibit multiple time scales. Standard discretization techniques are constrained to capture the finest scale in order to accurately predict the response of the system. In this paper, we provide an alternative route that circumvents the prohibitive meshes arising from the necessity of capturing fine-scale behaviors. The proposed methodology is based on a time-separated representation within the standard Proper Generalized Decomposition, where the time coordinate is transformed into a multi-dimensional time through new separated coordinates, each representing one scale, while continuity is ensured in the scale coupling. For instance, when two different time scales are considered, the governing partial differential equation is transformed into a nonlinear system that iterates between the so-called microtime and macrotime, so that the time coordinate can be viewed as a 2D time. The macroscale effects are taken into account by means of a finite-element-based macro-discretization, whereas the microscale effects are handled with unidimensional parent spaces that are replicated throughout the time domain. The resulting separated representation allows a very fine time discretization without impacting the computational efficiency. The proposed formulation is explored and numerically verified on thermal and elastodynamic problems.
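The change of variables behind the "2D time" can be sketched as follows: a fine 1D time signal is reindexed by macrotime T and microtime tau via t = T_k + tau, and a signal with separable slow/fast structure becomes low-rank in these coordinates. The signal and scale ratio below are illustrative; the paper's scheme additionally enforces continuity across macro-intervals.

```python
import numpy as np

# Reshape the time axis into a (macrotime, microtime) grid: t = T_k + tau.
n_macro, n_micro = 40, 100           # macro steps x micro samples per step
t = np.linspace(0.0, 1.0, n_macro * n_micro, endpoint=False)
u = np.sin(2 * np.pi * t) * np.cos(2 * np.pi * 400 * t)  # slow x fast

U = u.reshape(n_macro, n_micro)      # u(t) -> u(T, tau), a "2D time"
# The two-scale structure makes u low-rank in these coordinates
# (here exactly rank 2, from expanding the slow factor over T and tau).
rank = np.linalg.matrix_rank(U, tol=1e-8 * np.linalg.norm(U))
print("numerical rank of u(T, tau):", rank)
```

A low-rank (separated) solution in (T, tau) is exactly what lets the method afford n_macro * n_micro effective time resolution while only storing and solving for one-dimensional modes.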


2021 ◽  
Author(s):  
Gaurav Chauda ◽  
Daniel J. Segalman

Abstract To resolve elastic, frictional contact problems in detail, many nodes (at least tens, and more suitably hundreds [1]) are necessary over the contact patch. Generally, this fine discretization results in intractable numbers of system equations, but the problem is greatly mitigated when the elasticity of the contacting bodies is represented by elastic compliance matrices rather than stiffness matrices. An examination of the classical analytic expressions for the contact of disks, an example of smooth contact, shows that for most standard engineering metals, such as brass, steel, or titanium, the pressures that would cause more than one degree of arc of contact would push the materials past the elastic limit. The discretization necessary to capture the interface tractions would therefore be on the order of at least tens of nodes; the resulting boundary integral formulation would involve several hundred nodes over the disk, and the corresponding finite element mesh would have tens of thousands. The resulting linear system of equations must be solved at each load step, and the numerical problem becomes extremely difficult or intractable. Extremely fine contact patch resolution can instead be achieved by a compliance method that exploits Fourier analysis and the Michell solution. The advantages of this compliance method are that only degrees of freedom on the surface are introduced, and those not in the region of contact are eliminated from the system of equations to be solved.
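The size reduction that the compliance formulation buys can be sketched generically: with surface displacements written as u = C p (C a compliance matrix relating surface tractions p to surface displacements), only the rows and columns belonging to the contact patch enter the solve. The matrix below is a synthetic symmetric positive definite stand-in, not a Michell-solution compliance, and the patch and loading are illustrative.

```python
import numpy as np

# Surface-only compliance system: restrict u = C @ p to the contact patch.
rng = np.random.default_rng(0)
n_surface = 400                          # surface DOFs around the disk
A = rng.standard_normal((n_surface, n_surface))
C = A @ A.T + n_surface * np.eye(n_surface)  # synthetic SPD "compliance"

contact = np.arange(180, 220)            # presumed contact patch (40 DOFs)
u_target = np.ones(contact.size)         # prescribed overlap on the patch

# Solve only on the patch: a 40x40 system instead of 400x400.
p_contact = np.linalg.solve(C[np.ix_(contact, contact)], u_target)

p = np.zeros(n_surface)                  # tractions vanish off the patch
p[contact] = p_contact
print("system size solved:", contact.size)
```

In a frictional setting the active patch is not known a priori, so in practice the reduced system is re-formed as nodes enter or leave contact at each load step; the point of the compliance form is that each such solve stays at contact-patch size.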


2002 ◽  
Vol 12 (05) ◽  
pp. 411-424
Author(s):  
SHOULING HE

In this paper multilayer neural networks (MNNs) are used to control the balancing of a class of inverted pendulums. Unlike normal inverted pendulums, the pendulum discussed here has two degrees of rotational freedom and a base-point that moves randomly in three-dimensional space. The goal is to apply control torques to keep the pendulum in a prescribed position in spite of the random movement of the base-point. Since the inclusion of the base-point motion leads to a non-autonomous dynamic system with time-varying parametric excitation, the design of the control system is a challenging task. A feedback control algorithm is proposed that utilizes a set of neural networks to compensate for the effect of the system's nonlinearities. The weight parameters of the neural networks are updated on-line, according to a learning algorithm that guarantees the Lyapunov stability of the control system. Furthermore, since the base-point movement is considered unmeasurable, a neural inverse model is employed to estimate it from measured state variables only. The estimate is then utilized within the main control algorithm to produce compensating control signals. The examination of the proposed control system through simulations demonstrates the promise of the methodology and exhibits positive aspects that cannot be achieved by previously developed techniques on the same problem. These aspects include fast, yet well-damped responses with reasonable control torques and no requirement for knowledge of the model or the model parameters. The work presented here can benefit practical problems such as the study of stable locomotion of the human upper body and bipedal robots.


2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
Nebiye Korkmaz ◽  
Zekeriya Güney

As an approach to approximating solutions of Fredholm integral equations of the second kind, adaptive hp-refinement is used, first together with the Galerkin method and then with the Sloan iteration applied to the Galerkin solution. Linear hat functions and modified integrated Legendre polynomials are used as basis functions for the approximations. The most appropriate refinement is determined by an optimization problem given by Demkowicz (2007). During the calculations, L2-projections of approximate solutions on the four different meshes that can occur between the coarse mesh and the fine mesh are computed. Depending on the error values, these procedures can be repeated consecutively, or different meshes can be used in order to decrease the error values.
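The mesh-transfer operation the adaptive procedure relies on, an L2-projection onto a hat-function space, can be sketched in one dimension: solve M c = b, where M is the mass matrix of the hat basis and b collects the moments of the function being projected. The target function and mesh size below are illustrative.

```python
import numpy as np

# L2-project f(x) = x**2 onto continuous piecewise linears (hat
# functions) on a uniform mesh of [0, 1]: assemble M and b, solve M c = b.
n = 8                                    # number of elements (illustrative)
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)
f = lambda x: x ** 2

M = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
gauss_x = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule
for e in range(n):                        # element-by-element assembly
    xl, xr = nodes[e], nodes[e + 1]
    xm, half = 0.5 * (xl + xr), 0.5 * (xr - xl)
    for gx in gauss_x:
        xq = xm + half * gx               # quadrature point
        w = half                          # weight 1 times Jacobian
        phi = np.array([(xr - xq) / h, (xq - xl) / h])  # local hats
        for a in range(2):
            b[e + a] += w * phi[a] * f(xq)
            for c in range(2):
                M[e + a, e + c] += w * phi[a] * phi[c]

coeffs = np.linalg.solve(M, b)            # nodal values of the projection
err = np.max(np.abs(coeffs - f(nodes)))   # O(h^2) deviation for smooth f
print(f"max nodal deviation from f: {err:.3e}")
```

Projecting a fine-mesh solution onto a coarser mesh works the same way, except that b is assembled by integrating the fine-mesh function against the coarse-mesh hats, which is where the intermediate meshes mentioned above enter.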


2020 ◽  
Vol 88 (3) ◽  
Author(s):  
Roberta Massabò

Abstract Upper and lower bounds for the parameters of one-dimensional theories used in the analysis of sandwich fracture specimens are derived by matching the energy release rate with two-dimensional elasticity solutions. The theory of a beam on an elastic foundation and modified beam theory are considered. Bounds are derived analytically for foundation modulus and crack length correction in single cantilever beam (SCB) sandwich specimens and verified using accurate finite element results and experimental data from the literature. Foundation modulus and crack length correction depend on the elastic mismatch between face sheets and core and are independent of the core thickness if this is above a limit value, which also depends on the elastic mismatch. The results in this paper clarify conflicting results in the literature, explain the approximate solutions, and highlight their limitations. The bounds of the model parameters can be applied directly to specimens satisfying specific geometrical/material ratios, which are given in the paper, or used to support and validate numerical calculations and define asymptotic limits.


2013 ◽  
Vol 59 (218) ◽  
pp. 1189-1201 ◽  
Author(s):  
E.A. Podolskiy ◽  
G. Chambon ◽  
M. Naaim ◽  
J. Gaume

The finite-element method (FEM) is one of the main numerical analysis methods in continuum mechanics and mechanics of solids (Huebner and others, 2001). Through mesh discretization of a given continuous domain into a finite number of sub-domains, or elements, the method finds approximate solutions to sets of simultaneous partial differential equations, which express the behavior of the elements and the entire system. For decades this methodology has played an ever-growing role in mechanical engineering, structural analysis and, in particular, snow mechanics. To the best of our knowledge, the application of finite-element analysis in snow mechanics has never been summarized. Therefore, in this correspondence we provide a table with a detailed review of the main FEM studies on snow mechanics performed from 1971 to 2012 (40 papers), to facilitate comparison between different mechanical approaches, to outline numerical recipes, and for future reference. We believe that this kind of compact review in tabulated form will produce a snapshot of the state of the art, and thus become an appropriate, timely and beneficial reference for any relevant follow-up research, including, for example, not only snow avalanche questions, but also modeling of snow microstructure and tire–snow interaction. To that end, this correspondence is organized according to the following structure. Table 1 includes all essential information about previously published FEM studies originally developed to investigate stresses in snow, with all corresponding mechanical and numerical parameters. Columns in Table 1 provide references to particular studies, placed in chronological order. Rows correspond to the main model parameters and other details of each considered case.
