solution manifold
Recently Published Documents

Total documents: 26 (last five years: 7)
H-index: 7 (last five years: 0)

2021 · Vol 88 (1)
Author(s): Moritz Geist, Philipp Petersen, Mones Raslan, Reinhold Schneider, Gitta Kutyniok

Abstract: We perform a comprehensive numerical study of the effect of approximation-theoretical results for neural networks on practical learning problems in the context of numerical analysis. As the underlying model, we study the machine-learning-based solution of parametric partial differential equations. Here, approximation theory for fully connected neural networks predicts that the performance of the model should depend only very mildly on the dimension of the parameter space and is instead determined by the intrinsic dimension of the solution manifold of the parametric partial differential equation. We use various methods to establish comparability between test cases by minimizing the effect of the choice of test case on the optimization and sampling aspects of the learning problem. We find strong support for the hypothesis that approximation-theoretical effects heavily influence the practical behavior of learning problems in numerical analysis. Turning to practically more successful and modern architectures, we conclude the study by deriving improved error bounds for convolutional neural networks.
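As a hedged, minimal illustration of the setup this abstract describes (the toy parametric Poisson problem, the network width and depth, and the training schedule below are assumptions for the sketch, not the paper's configuration), a fully connected ReLU network can be trained to approximate a parameter-to-solution map from snapshots:

# Toy parametric Poisson problem (assumed for this sketch): -u''(x) = sum_k y_k sin(k*pi*x)
# on (0, 1) with zero boundary values, whose exact solution is
# u(x; y) = sum_k y_k sin(k*pi*x) / (k*pi)^2.  A fully connected ReLU network learns y -> u.
import torch
import torch.nn as nn

p, n_grid, n_train = 10, 64, 2000                  # parameter dimension, grid points, snapshots
x = torch.linspace(0.0, 1.0, n_grid)
k = torch.arange(1, p + 1, dtype=torch.float32)
basis = torch.sin(torch.pi * torch.outer(x, k)) / (torch.pi * k) ** 2   # (n_grid, p)

y = 2 * torch.rand(n_train, p) - 1                 # parameters drawn uniformly from [-1, 1]^p
u = y @ basis.T                                    # exact solution snapshots on the grid

model = nn.Sequential(nn.Linear(p, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, n_grid))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):                               # plain full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(y), u)
    loss.backward()
    opt.step()
print(f"relative training error: {(loss.sqrt() / u.square().mean().sqrt()).item():.2e}")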


Author(s): Gitta Kutyniok, Philipp Petersen, Mones Raslan, Reinhold Schneider

Abstract: We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, without any knowledge of its concrete shape, we use the inherent low dimensionality of the solution manifold to obtain approximation rates that are significantly superior to those provided by classical neural network approximation results. Concretely, we use the existence of a small reduced basis to construct, for a large variety of parametric partial differential equations, neural networks that yield approximations of the parametric solution maps in such a way that the sizes of these networks essentially depend only on the size of the reduced basis.
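A hedged sketch of the general principle rather than the paper's construction: if the solution manifold admits a small reduced basis, one can compute such a basis from snapshots by a truncated SVD (POD), after which a network only has to represent the low-dimensional coefficient map. The toy snapshot family, snapshot count, and tolerance below are illustrative assumptions.

# Assumed toy solution map; the point is only the reduced-basis step: a truncated SVD of the
# snapshot matrix yields a basis V such that every solution is well approximated by V @ c(y)
# for a short coefficient vector c(y).
import numpy as np

rng = np.random.default_rng(0)
p, n_grid, n_snap = 10, 64, 500
x = np.linspace(0.0, 1.0, n_grid)
k = np.arange(1, p + 1)
A = np.sin(np.pi * np.outer(x, k)) / (np.pi * k) ** 2   # (n_grid, p) solution "modes"
Y = rng.uniform(-1.0, 1.0, size=(n_snap, p))
U = Y @ A.T                                             # snapshot matrix, one solution per row

_, s, Vt = np.linalg.svd(U, full_matrices=False)        # POD / truncated SVD
energy = np.cumsum(s**2) / np.sum(s**2)
n_rb = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1    # modes needed for the chosen tolerance
V = Vt[:n_rb].T                                         # reduced basis, (n_grid, n_rb)

coeffs = U @ V                                          # reduced coefficients c(y) per snapshot
err = np.linalg.norm(U - coeffs @ V.T) / np.linalg.norm(U)
print(f"reduced dimension: {n_rb}, relative reconstruction error: {err:.2e}")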


Author(s): Hans-Otto Walther

Abstract: We construct a delay functional $$d$$ on an open subset of the space $$C^1_r=C^1([-r,0],\mathbb{R})$$ and find $$h\in(0,r)$$ so that the equation $$x'(t)=-x(t-d(x_t))$$ defines a continuous semiflow of continuously differentiable solution operators on the solution manifold $$X=\{\phi\in C^1_r:\phi'(0)=-\phi(-d(\phi))\},$$ and along each solution the delayed argument $$t-d(x_t)$$ is strictly increasing, and there exists a solution whose short segments $$x_{t,\mathrm{short}}=x(t+\cdot)\in C^2_h,\quad t\ge 0,$$ are dense in an infinite-dimensional subset of the space $$C^2_h$$. The result supplements earlier work on complicated motion caused by state-dependent delay with oscillatory delayed arguments.
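As a hedged numerical illustration of an equation of this type (the concrete delay functional $$d$$ and the initial history below are invented for the sketch and are not Walther's construction), one can integrate $$x'(t)=-x(t-d(x_t))$$ by an explicit Euler method, interpolating the stored solution to evaluate the delayed value:

# Assumed ingredients for the sketch: a toy state-dependent delay d that stays inside (0, r)
# and a smooth initial history on [-r, 0].  The scheme is explicit Euler with linear
# interpolation of the stored solution to evaluate x(t - d(x_t)).
import numpy as np

r, T, dt = 1.0, 20.0, 1e-3          # maximal delay, final time, step size
n_hist = int(round(r / dt))

def d(x_now):                       # illustrative delay functional, values in (0.25, 0.75)
    return 0.25 + 0.5 / (1.0 + x_now**2)

ts = np.arange(-n_hist, int(round(T / dt)) + 1) * dt
xs = np.empty_like(ts)
xs[: n_hist + 1] = np.cos(ts[: n_hist + 1])        # initial history on [-r, 0]

for i in range(n_hist, len(ts) - 1):
    x_delayed = np.interp(ts[i] - d(xs[i]), ts[: i + 1], xs[: i + 1])
    xs[i + 1] = xs[i] - dt * x_delayed             # Euler step of x'(t) = -x(t - d(x_t))
print(f"x({T}) ≈ {xs[-1]:.4f}")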


Acta Numerica · 2021 · Vol 30 · pp. 445-554
Author(s): Omar Ghattas, Karen Willcox

This article addresses the inference of physics models from data, from the perspectives of inverse problems and model reduction. These fields develop formulations that integrate data into physics-based models while exploiting the fact that many mathematical models of natural and engineered systems exhibit an intrinsically low-dimensional solution manifold. In inverse problems, we seek to infer uncertain components of the inputs from observations of the outputs, while in model reduction we seek low-dimensional models that explicitly capture the salient features of the input–output map through approximation in a low-dimensional subspace. In both cases, the result is a predictive model that reflects data-driven learning yet deeply embeds the underlying physics, and thus can be used for design, control and decision-making, often with quantified uncertainties. We highlight recent developments in scalable and efficient algorithms for inverse problems and model reduction governed by large-scale models in the form of partial differential equations. Several illustrative applications to large-scale complex problems across different domains of science and engineering are provided.
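For a hedged, minimal illustration of the inverse-problem half of this picture (reduced here to the simplest linear-Gaussian case with a synthetic forward map, not the article's large-scale PDE setting), the posterior mean and covariance that quantify the remaining uncertainty are available in closed form:

# Assumed synthetic setup: observations y = G m + noise, Gaussian prior N(0, I) on the
# parameter m and Gaussian noise of standard deviation sigma.  The posterior mean is the
# data-informed estimate; the posterior covariance quantifies the remaining uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n_param, n_obs, sigma = 20, 5, 0.05
G = rng.standard_normal((n_obs, n_param)) / np.sqrt(n_param)   # synthetic forward map
m_true = rng.standard_normal(n_param)
y = G @ m_true + sigma * rng.standard_normal(n_obs)

H = G.T @ G / sigma**2 + np.eye(n_param)        # posterior precision (prior covariance = I)
post_cov = np.linalg.inv(H)
post_mean = post_cov @ (G.T @ y) / sigma**2
print("relative error of posterior mean:",
      np.linalg.norm(post_mean - m_true) / np.linalg.norm(m_true))
print("trace of posterior covariance (total uncertainty):", np.trace(post_cov))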


2021 · Vol 11 (1)
Author(s): Erez James Cohen, Kunlin Wei, Diego Minciacchi

Abstract: How strategies are formulated during a performance is an important aspect of motor control. Knowledge of the strategy employed in a task may help subjects achieve better performances, as it reveals other possible strategies that could be used and helps refine the current one. We sought to investigate how much of a performance is conditioned by the initial state and whether behavior is modified within a short timescale over the course of the performance. In other words, we focus on the process of execution rather than on the outcome. To this end, we used a repeated continuous circle-tracing task. Performances were decomposed into different components (i.e., execution variables) whose combination numerically determines the movement outcome. By identifying the execution variables of speed and duration, we created an execution space and a solution manifold (i.e., the combinations of execution variables yielding zero discrepancy from the desired outcome) and divided the subjects, according to their initial performance in that space, into speed-preference, duration-preference, and no-preference groups. We demonstrated that specific strategies can be identified in a continuous task and that strategies remain relatively stable throughout the performance. Moreover, since performances remained stable, the initial location in the execution space can be used to determine a subject's strategy. Finally, contrary to other studies, we demonstrated that, in a continuous task, performances were associated with reduced exploration of the execution space.
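A hedged sketch of the execution-space construction (the task variables and target below are illustrative choices, not the study's exact definitions): with execution variables speed and duration and a desired traced arc length as the outcome, the solution manifold is the set of speed-duration pairs whose product equals that length.

# Assumed task variables for the sketch: speed v (m/s) and duration t (s), with the desired
# outcome a fixed traced arc length L.  The solution manifold is the curve v * t = L; the
# signed discrepancy of an observed execution tells how far it lies from that manifold.
import numpy as np

L = 2 * np.pi * 0.10                     # e.g. one lap of a 10 cm-radius circle, in metres
v = np.linspace(0.05, 0.50, 200)         # candidate speeds
t_manifold = L / v                       # durations lying on the solution manifold

v_obs, t_obs = 0.20, 3.5                 # an observed execution (illustrative numbers)
print(f"outcome discrepancy: {v_obs * t_obs - L:+.3f} m "
      "(0 means the execution lies on the manifold)")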


Author(s): Victor Aldaya, Julio Guerrero, Francisco F. López-Ruiz

In this paper, we exploit the formal equivalence of the solution manifolds of two distinct physical systems to create enough symmetries to characterize them by Noether invariants, thus facilitating their future quantization. In so doing, we generalize, in a certain sense, the Arnold transformation to systems that are not necessarily linear. In particular, this algorithm applies to motion on de Sitter space-time, providing a finite-dimensional algebra that globally generalizes the Heisenberg–Weyl algebra. In this case, the basic (contact) symmetry is imported from the motion of a (non-relativistic) particle on the sphere [Formula: see text].


2020 · Vol 54 (5) · pp. 1509-1524
Author(s): Albert Cohen, Wolfgang Dahmen, Ronald DeVore, James Nichols

Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems was theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space Vn of dimension n, constructed by a certain greedy strategy, has approximation error comparable to that of the optimal space associated with the Kolmogorov n-width of the solution manifold. The greedy construction of the reduced basis space is performed in an offline stage which requires, at each step, a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained by discretizing the parameter domain. Guaranteeing a final approximation error ε for the space generated by the greedy algorithm requires, in principle, that the snapshots associated with this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε covering number of the solution manifold M, and this covering number typically behaves like exp(Cε^(-1/s)) for some C > 0 when the solution manifold has n-width decay O(n^(-s)). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^(-1). Our proof of this fact is established using inverse inequalities for polynomials in high dimensions.
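A hedged sketch of the offline greedy selection over a random training set (the synthetic snapshot family below stands in for a parametric PDE, and the error is measured by projection onto the current reduced space rather than by the residual-based surrogates used in practice):

# Assumed synthetic snapshot family; the greedy step picks, over a random training set of
# parameters, the snapshot worst approximated by the current reduced space, orthonormalizes
# it against the basis, and appends it.
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_train, n_rb = 200, 500, 10
x = np.linspace(0.0, 1.0, n_grid)

def snapshot(mu):                                # toy solution map mu -> u(., mu)
    return 1.0 / (1.0 + 25.0 * mu * x**2)

train = rng.uniform(0.1, 10.0, size=n_train)     # random training set in parameter space
U = np.stack([snapshot(mu) for mu in train])     # (n_train, n_grid)

basis = np.zeros((0, n_grid))                    # rows form an orthonormal reduced basis
for _ in range(n_rb):
    proj = U @ basis.T @ basis                   # projection onto the current reduced space
    errs = np.linalg.norm(U - proj, axis=1)
    j = int(np.argmax(errs))                     # greedy step: worst-approximated snapshot
    new = U[j] - (basis @ U[j]) @ basis          # Gram-Schmidt against the current basis
    basis = np.vstack([basis, new / np.linalg.norm(new)])
    print(f"dim {len(basis):2d}: max training error {errs[j]:.3e}")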


2019 · Vol 489 (4) · pp. 347-350
Author(s): L. E. Rossovskii, A. A. Tovsultanov

We study the Dirichlet problem for a functional differential equation containing a shifted and contracted argument under the sign of the Laplacian. We establish conditions for unique solvability and also demonstrate that the problem may have an infinite-dimensional solution manifold.
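For concreteness, a hedged model example of the class of equations described (the coefficient $$a$$, the contraction factor $$q$$, and the shift $$h$$ below are illustrative and not necessarily those studied in the paper) is the Dirichlet problem $$-\Delta\bigl(u(x)+a\,u(qx-h)\bigr)=f(x)\ \text{in }\Omega,\qquad u=0\ \text{on }\partial\Omega,\qquad 0<q<1,$$ in which the argument of the unknown function appears both contracted and shifted under the Laplacian.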


2019
Author(s): Nobuyasu Nakano, Yuki Inaba, Senshi Fukashiro, Shinsuke Yoshioka

Abstract: How humans execute accurate movements in the presence of motor noise is a key problem in biomechanics and motor control, since such noise limits performance improvement in daily and sporting activities. The aim of this study was to clarify the strategy used by basketball players during free-throw shooting. Two possible hypotheses were examined: the players minimize the release speed to decrease signal-dependent noise, or the players maximize the shot success probability by accounting for their variability. Eight collegiate players and one professional player participated in this study, attempting shots from the free-throw line while recorded with a motion capture system. The solution manifold consisting of ball parameters at release was calculated, and the optimal strategy was simulated by considering ball-parameter variability; this result was compared with the actual data. Our results showed that participants selected solutions near the minimum release speed: the deviation of the measured release angle from the minimum-speed angle was close to zero (2.8 ± 3.1°). In simulation, however, an increase in speed-dependent noise did not have a significant influence on the ball landing position. Additionally, the effect of release-angle error on the ball landing position was smallest under the minimum-speed strategy. Therefore, the players minimize the release speed to minimize the effect of release error on performance, rather than to minimize the speed-dependent noise itself. In other words, the strategy is a "near-minimum-speed strategy" as well as a "minimum-error-propagation strategy". These findings are important for understanding how sports experts deal with intrinsic noise to improve performance.
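A hedged sketch of the release-parameter solution manifold described here (the court geometry and release height are nominal values, not the study's measurements): for each release angle, planar projectile motion determines the unique release speed that sends the ball through the hoop centre, and minimizing that speed over angles gives the minimum-speed solution.

# Assumed nominal geometry: free-throw line to hoop centre D = 4.19 m, rim height 3.05 m,
# release height 2.0 m.  For each release angle theta, solving y(D) = H for the projectile
# y(x) = h0 + x tan(theta) - g x^2 / (2 v^2 cos^2(theta)) gives the speed on the manifold.
import numpy as np

g = 9.81
D, H, h0 = 4.19, 3.05, 2.0

theta = np.radians(np.linspace(35.0, 75.0, 400))                # candidate release angles
denom = 2.0 * np.cos(theta) ** 2 * (h0 + D * np.tan(theta) - H)
v = np.sqrt(g * D**2 / denom)                                   # speed on the solution manifold

i = int(np.argmin(v))
print(f"minimum-speed release: angle = {np.degrees(theta[i]):.1f} deg, speed = {v[i]:.2f} m/s")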

