Sparse quadrature for high-dimensional integration with Gaussian measure

2018 ◽  
Vol 52 (2) ◽  
pp. 631-657 ◽  
Author(s):  
Peng Chen

In this work we analyze the dimension-independent convergence of an abstract sparse quadrature scheme for the numerical integration of functions of high-dimensional parameters with Gaussian measure. Under certain assumptions on the exactness and boundedness of the univariate quadrature rules, as well as on the regularity of the parametric functions with respect to the parameters, we prove that the convergence of the sparse quadrature error is independent of the number of parameter dimensions. Moreover, we propose both an a priori and an a posteriori scheme for the construction of a practical sparse quadrature rule and perform numerical experiments to demonstrate their dimension-independent convergence rates.
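As a rough illustration of the kind of scheme analyzed here (not the paper's specific construction), the classical isotropic Smolyak combination formula with Gauss–Hermite rules integrates against a Gaussian measure; the integrand, dimension d, and level q below are arbitrary illustrative choices:

```python
import numpy as np
from itertools import product
from math import comb

def gauss_hermite_prob(n):
    """n-point Gauss-Hermite rule normalized for the standard normal N(0,1)."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return x, w / w.sum()

def smolyak_gaussian(f, d, q):
    """Isotropic Smolyak combination formula approximating E[f(Y)], Y ~ N(0, I_d),
    using i Gauss-Hermite points at univariate level i."""
    total = 0.0
    # combination formula: sum over multi-indices i >= 1 with q-d+1 <= |i| <= q
    for i in product(range(1, q - d + 2), repeat=d):
        s = sum(i)
        if not (q - d + 1 <= s <= q):
            continue
        coeff = (-1) ** (q - s) * comb(d - 1, q - s)
        rules = [gauss_hermite_prob(ij) for ij in i]
        # tensor-product quadrature for this multi-index
        for k in product(*(range(ij) for ij in i)):
            y = np.array([rules[j][0][k[j]] for j in range(d)])
            w = np.prod([rules[j][1][k[j]] for j in range(d)])
            total += coeff * w * f(y)
    return total
```

For smooth integrands the error decays rapidly with the level q while the number of quadrature points grows far more slowly than the full tensor product's.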

2020 ◽  
Vol 54 (4) ◽  
pp. 1259-1307
Author(s):  
Jakob Zech ◽  
Christoph Schwab

We analyse convergence rates of Smolyak integration for parametric maps u: U → X taking values in a Banach space X, defined on the parameter domain U = [−1,1]^N. For parametric maps which are sparse, as quantified by summability of their Taylor polynomial chaos coefficients, dimension-independent convergence rates superior to N-term approximation rates under the same sparsity are achievable. We propose a concrete Smolyak algorithm to a priori identify integrand-adapted sets of active multiindices (and thereby unisolvent sparse grids of quadrature points) via upper bounds for the integrands’ Taylor gpc coefficients. For so-called “(b,ε)-holomorphic” integrands u with b ∈ ℓ^p(ℕ) for some p ∈ (0, 1), we prove the dimension-independent convergence rate 2/p − 1 in terms of the number of quadrature points. The proposed Smolyak algorithm is proved to yield (essentially) the same rate in terms of the total computational cost for both nested and non-nested univariate quadrature points. Numerical experiments and a mathematical sparsity analysis accounting for cancellations in quadratures and in the combination formula demonstrate that the asymptotic rate 2/p − 1 is realized computationally for a moderate number of quadrature points under certain circumstances. By a refined analysis of model integrand classes we show that a generally large preasymptotic range otherwise precludes reaching the asymptotic rate 2/p − 1 for practically relevant numbers of quadrature points.
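The a priori identification of active multi-indices from coefficient bounds can be sketched generically: given a hypothetical bound sequence b(j) such that ∏_j b(j)^ν_j dominates the Taylor gpc coefficient of multi-index ν, thresholding that product yields a finite, downward-closed index set. The sequence b and threshold eps below are illustrative assumptions, not the paper's algorithm:

```python
def nu_weight(nu, b):
    """Upper-bound surrogate prod_j b(j)**nu[j] for the coefficient of multi-index nu."""
    p = 1.0
    for j, nj in enumerate(nu):
        p *= b(j) ** nj
    return p

def a_priori_set(b, eps):
    """Downward-closed multi-index set {nu : prod_j b(j)**nu[j] >= eps},
    assuming 0 < b(j) < 1 decreasing with b(j) -> 0, so the set is finite.
    Multi-indices are tuples with trailing zeros trimmed."""
    # largest admissible coordinate: the unit index e_j qualifies iff b(j) >= eps
    jmax = -1
    while b(jmax + 1) >= eps:
        jmax += 1
    Lambda = set()
    stack = [()]
    while stack:
        nu = stack.pop()
        if nu in Lambda:
            continue
        Lambda.add(nu)
        # increments preserve the threshold monotonically, so BFS finds the whole set
        for j in range(jmax + 1):
            child = list(nu) + [0] * max(0, j + 1 - len(nu))
            child[j] += 1
            child = tuple(child)
            if nu_weight(child, b) >= eps:
                stack.append(child)
    return Lambda
```

Because the surrogate weight is monotone decreasing under coordinate increments, the resulting set is automatically downward closed, which is what makes the corresponding sparse grid unisolvent.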


Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 222
Author(s):  
Juan C. Laria ◽  
M. Carmen Aguilera-Morillo ◽  
Enrique Álvarez ◽  
Rosa E. Lillo ◽  
Sara López-Taruella ◽  
...  

Over the last decade, regularized regression methods have offered alternatives for performing multi-marker analysis and feature selection in a whole genome context. The process of defining a list of genes that will characterize an expression profile remains unclear. It currently relies upon advanced statistics and can use an agnostic point of view or include some a priori knowledge, but overfitting remains a problem. This paper introduces a methodology to deal with the variable selection and model estimation problems in the high-dimensional set-up, which can be particularly useful in the whole genome context. Results are validated using simulated data and a real dataset from a triple-negative breast cancer study.
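A minimal sketch of the kind of regularized regression used for multi-marker feature selection (a plain lasso fitted by coordinate descent, not the authors' specific methodology); the synthetic dimensions and penalty below are illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b  # running residual
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation with column j
            rho = X[:, j] @ (r + X[:, j] * b[j]) / n
            bj_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - bj_new)
            b[j] = bj_new
    return b
```

The L1 penalty drives most coefficients exactly to zero, which is what yields a short candidate gene list; choosing lam (e.g. by cross-validation) is where overfitting control enters.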


Author(s):  
Muhammad Hassan ◽  
Benjamin Stamm

In this article, we analyse an integral equation of the second kind that represents the solution of N interacting dielectric spherical particles undergoing mutual polarisation. A traditional analysis cannot quantify the scaling of the stability constants, and thus of the approximation error, with respect to the number N of dielectric spheres involved. We develop a new a priori error analysis that demonstrates N-independent stability of the continuous and discrete formulations of the integral equation. Consequently, we obtain convergence rates that are independent of N.


2018 ◽  
Vol 39 (4) ◽  
pp. 2096-2134 ◽  
Author(s):  
Charles-Edouard Bréhier ◽  
Jianbo Cui ◽  
Jialin Hong

This article analyses an explicit temporal splitting numerical scheme for the stochastic Allen–Cahn equation driven by additive noise in a bounded spatial domain with smooth boundary in dimension $d\leqslant 3$. The splitting strategy is combined with an exponential Euler scheme of an auxiliary problem. When $d=1$ and the driving noise is a space–time white noise we first show some a priori estimates of this splitting scheme. Using the monotonicity of the drift nonlinearity we then prove that under very mild assumptions on the initial data this scheme achieves the optimal strong convergence rate $\mathcal{O}(\delta t^{\frac 14})$. When $d\leqslant 3$ and the driving noise possesses some regularity in space we study exponential integrability properties of the exact and numerical solutions. Finally, in dimension $d=1$, these properties are used to prove that the splitting scheme has a strong convergence rate $\mathcal{O}(\delta t)$.
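The splitting idea can be sketched in $d=1$: the drift ODE $u' = u - u^3$ has a closed-form flow, which is composed with an exponential Euler step for the heat semigroup plus a stochastic convolution increment. The finite-difference discretization, grid sizes, and white-noise scaling below are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

def phi_flow(u, dt):
    """Exact flow of the drift ODE u' = u - u**3 over time dt (closed form)."""
    e = np.exp(2.0 * dt)
    return u * np.sqrt(e / (1.0 + (e - 1.0) * u ** 2))

def splitting_scheme(u0, T, n_steps, seed=0):
    """One sample path: exact drift substep followed by an exponential Euler
    step for the heat semigroup plus space-time white noise (d = 1,
    homogeneous Dirichlet BCs, finite-difference Laplacian)."""
    M = u0.size
    h = 1.0 / (M + 1)
    A = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1)) / h ** 2
    lam, V = np.linalg.eigh(A)  # eigenvalues lam < 0, orthonormal eigenvectors V
    dt = T / n_steps
    rng = np.random.default_rng(seed)
    u = u0.copy()
    for _ in range(n_steps):
        u = phi_flow(u, dt)                       # nonlinear substep
        c = np.exp(lam * dt) * (V.T @ u)          # e^{dt A} applied in eigencoords
        # variance of the stochastic convolution increment per eigenmode,
        # with 1/h scaling for discretized space-time white noise
        var = (1.0 - np.exp(2.0 * lam * dt)) / (-2.0 * lam)
        c += np.sqrt(var / h) * rng.standard_normal(M)
        u = V @ c
    return u
```

Treating the linear part exactly per step is what allows the cubic nonlinearity's monotonicity to be exploited in the strong error analysis.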


Author(s):  
Cyrus Shaoul ◽  
Chris Westbury

HAL (Hyperspace Analog to Language) is a high-dimensional model of semantic space that uses the global co-occurrence frequency of words in a large corpus of text as the basis for a representation of semantic memory. In the original HAL model, many parameters were set without any a priori rationale. In this chapter we describe a new computer application called the High Dimensional Explorer (HiDEx) that makes it possible to systematically alter the values of the model’s parameters and thereby to examine their effect on the co-occurrence matrix that instantiates the model. New parameter sets give us measures of semantic density that improve the model’s ability to predict behavioral measures. Implications for such models are discussed.
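A HAL-style co-occurrence matrix can be sketched as follows; the ramped weighting (closer context words count more) follows the usual HAL description, while the window size and the restriction to preceding context are illustrative parameter choices of exactly the kind HiDEx is designed to vary:

```python
def hal_cooccurrence(tokens, window=5):
    """HAL-style co-occurrence matrix: for each word occurrence, count the
    preceding context words with linearly ramped weights, so a word at
    distance d contributes weight (window - d + 1)."""
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    M = [[0] * n for _ in range(n)]
    for pos, w in enumerate(tokens):
        for d in range(1, window + 1):
            if pos - d < 0:
                break
            context = tokens[pos - d]
            M[idx[w]][idx[context]] += window - d + 1
    return vocab, M
```

Row vectors of M (optionally concatenated with column vectors for following context) then serve as the word representations whose density and distance measures the chapter explores.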


2019 ◽  
Vol 144 (3) ◽  
pp. 585-614
Author(s):  
Joscha Gedicke ◽  
Arbaz Khan

In this paper, we present a divergence-conforming discontinuous Galerkin finite element method for Stokes eigenvalue problems. We prove a priori error estimates for the eigenvalue and eigenfunction errors and present a residual based a posteriori error estimator. The a posteriori error estimator is proven to be reliable and (locally) efficient. We finally present some numerical examples that verify the a priori convergence rates and the reliability and efficiency of the residual based a posteriori error estimator.


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 331
Author(s):  
Bernd Hofmann ◽  
Christopher Hofmann

This paper deals with the Tikhonov regularization for nonlinear ill-posed operator equations in Hilbert scales with oversmoothing penalties. One focus is on the application of the discrepancy principle for choosing the regularization parameter and its consequences. Numerical case studies are performed in order to complement analytical results concerning the oversmoothing situation. For example, case studies are presented for exact solutions of Hölder-type smoothness with a low Hölder exponent. Moreover, the regularization parameter choice using the discrepancy principle, for which rate results are proven in the oversmoothing case in reference (Hofmann, B.; Mathé, P. Inverse Probl. 2018, 34, 015007), is compared to Hölder-type a priori choices. On the other hand, well-known analytical results on the existence and convergence of regularized solutions are summarized and partially augmented. In particular, a sketch for a novel proof to derive Hölder convergence rates in the case of oversmoothing penalties is given, extending ideas from reference (Hofmann, B.; Plato, R. ETNA. 2020, 93).
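For a linear toy problem (not the nonlinear Hilbert-scale setting of the paper), Tikhonov regularization combined with the discrepancy principle can be sketched as a one-dimensional search over the regularization parameter; the operator, noise level, and τ below are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution x_alpha = argmin ||Ax - y||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_alpha(A, y, delta, tau=1.1, lo=1e-14, hi=1e4, iters=80):
    """Discrepancy principle via bisection in log(alpha): the residual
    ||A x_alpha - y|| is monotone increasing in alpha, so locate the alpha
    where it crosses tau * delta (delta = noise level)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov(A, y, mid) - y)
        if r < tau * delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The discrepancy principle deliberately stops regularizing once the data are fitted to the level of the noise, which is what the paper compares against Hölder-type a priori parameter choices.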


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Alexander Dementjev ◽  
Burkhard Hensel ◽  
Klaus Kabitzsch ◽  
Bernd Kauschinger ◽  
Steffen Schroeder

Machine tools are important components of highly complex industrial manufacturing, so end-product quality depends strictly on the accuracy of these machines; however, they are prone to deformation caused by their own heat. This deformation must be compensated in order to assure accurate production, which requires an adequate model of the high-dimensional thermal deformation process and the evaluation of its parameters. Unfortunately, such parameters are often unknown and cannot be calculated a priori. Identifying the parameters in real experiments is not an option for these models because of the high engineering effort and machine time required, and installing additional sensors to measure these parameters directly is uneconomical. Instead, an effective calibration of thermal models can be achieved by combining real and virtual measurements on a machine tool during its real operation, without installing additional sensors. In this paper, a new approach for thermal model calibration is presented. The results are very promising, and the approach can be recommended as an effective solution for this class of problems.
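As a toy illustration of calibrating a thermal model from measurements (a drastically simplified stand-in for the high-dimensional machine-tool setting, not the paper's method), a first-order cooling model with unknown rate k can be fitted by least squares; all names and values below are hypothetical:

```python
import numpy as np

def simulate_cooling(k, T_env, T0, ts):
    """First-order thermal model dT/dt = -k (T - T_env), closed-form solution."""
    return T_env + (T0 - T_env) * np.exp(-k * ts)

def calibrate_k(ts, temps, T_env, T0):
    """Least-squares estimate of k after linearizing the model:
    log((T - T_env) / (T0 - T_env)) = -k * t."""
    z = np.log((temps - T_env) / (T0 - T_env))
    return -(ts @ z) / (ts @ ts)
```

In the machine-tool context the "virtual measurements" would come from a simulation of the thermal process, and the calibration adjusts the model's unknown parameters until simulated and measured temperatures agree.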

