Equivalence between Sobolev spaces of first-order dominating mixed smoothness and unanchored ANOVA spaces on ℝ^d

2021 ◽  
Author(s):  
Alexander Gilbert ◽  
Frances Kuo ◽  
Ian Sloan


2009 ◽  
Vol 16 (4) ◽  
pp. 667-682
Author(s):  
Markus Hansen ◽  
Jan Vybíral

Abstract We give a proof of the Jawerth embedding for function spaces with dominating mixed smoothness of Besov and Triebel–Lizorkin type, $S^{s_0}_{p_0,q_0}F(\mathbb{R}^d) \hookrightarrow S^{s_1}_{p_1,p_0}B(\mathbb{R}^d)$, where $0 < p_0 < p_1 \le \infty$, $0 < q_0, q_1 \le \infty$, and $s_0 - 1/p_0 = s_1 - 1/p_1$. If $p_1 < \infty$, we prove also the Franke embedding $S^{s_0}_{p_0,p_1}B(\mathbb{R}^d) \hookrightarrow S^{s_1}_{p_1,q_1}F(\mathbb{R}^d)$. Our main tools are discretization by a wavelet isomorphism and multivariate rearrangements.


2017 ◽  
Vol 5 (1) ◽  
pp. 98-115 ◽  
Author(s):  
Eero Saksman ◽  
Tomás Soto

Abstract We establish trace theorems for function spaces defined on general Ahlfors regular metric spaces Z. The results cover the Triebel-Lizorkin spaces and the Besov spaces for smoothness indices s < 1, as well as the first order Hajłasz-Sobolev space M1,p(Z). They generalize the classical results from the Euclidean setting, since the traces of these function spaces onto any closed Ahlfors regular subset F ⊂ Z are Besov spaces defined intrinsically on F. Our method employs the definitions of the function spaces via hyperbolic fillings of the underlying metric space.


2001 ◽  
Vol 185 (2) ◽  
pp. 527-563 ◽  
Author(s):  
Fuzhou Gong ◽  
Michael Röckner ◽  
Wu Liming

Author(s):  
David Krieg ◽  
Mario Ullrich

Abstract We study the $L_2$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $e_n$ is the minimal worst-case error that can be achieved with $n$ function values, whereas the approximation number $a_n$ is the minimal worst-case error that can be achieved with $n$ pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that
$$ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}, $$
where $k_n \asymp n/\log(n)$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers, and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $H^s_{\mathrm{mix}}(\mathbb{T}^d)$ with dominating mixed smoothness $s > 1/2$ and dimension $d \in \mathbb{N}$, and we obtain
$$ e_n \,\lesssim\, n^{-s} \log^{sd}(n). $$
For $d > 2s+1$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
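As a rough numerical illustration (not from the paper), the tail bound above can be evaluated for a hypothetical square-summable sequence of approximation numbers, here $a_j = j^{-1}$. The function name, the exact choice $k_n = \lfloor n/\log n \rfloor$, and the omission of constants are all assumptions of this sketch:

```python
import math

def sampling_bound(a, n):
    """Evaluate the right-hand side of e_n <= sqrt((1/k_n) * sum_{j >= k_n} a_j^2)
    for a given sequence of approximation numbers a (0-indexed), with the
    hypothetical choice k_n = floor(n / log n); implied constants are ignored."""
    k_n = max(1, int(n / math.log(n)))
    tail = sum(x * x for x in a[k_n:])
    return math.sqrt(tail / k_n)

# Hypothetical approximation numbers a_j = j^{-s} with s = 1 (square-summable).
s = 1.0
a = [(j + 1) ** (-s) for j in range(100_000)]
for n in (10, 100, 1000):
    print(n, sampling_bound(a, n))
```

For $a_j = j^{-1}$ the tail sum behaves like $1/k_n$, so the bound decays like $\log(n)/n$, matching the polynomial rate $n^{-1}$ of the approximation numbers up to the logarithmic factor, as the abstract asserts for square-summable sequences.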

