Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning

2020 ◽  
Vol 58 (6) ◽  
pp. 3504-3529
Author(s):  
Sabrina Guastavino ◽  
Federico Benvenuto


2013 ◽ 
Vol 2013 ◽  
pp. 1-11
Author(s):  
Nadjib Boussetila ◽  
Salim Hamida ◽  
Faouzia Rebbani

We study an abstract elliptic Cauchy problem associated with an unbounded self-adjoint positive operator having a continuous spectrum. It is well known that such a problem is severely ill-posed; that is, the solution does not depend continuously on the Cauchy data. We propose two spectral regularization methods to construct a stable approximate solution to the original problem. Finally, further convergence results, including explicit convergence rates, are established under a priori bound assumptions on the exact solution.
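As an illustrative sketch (not taken from the paper), the basic mechanism of spectral regularization can be shown on a discretized severely ill-posed linear system: small spectral values amplify data noise under naive inversion, and truncating the spectral expansion at a cutoff level trades a small approximation error for stability. All names, sizes, and parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical discretized ill-posed problem A x = y with a rapidly
# decaying spectrum (a hallmark of severe ill-posedness).
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # left spectral basis
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # right spectral basis
s = np.exp(-np.arange(n))                          # exponentially decaying spectrum
A = U @ np.diag(s) @ V.T

x_true = V[:, 0] + 0.5 * V[:, 1]                   # smooth exact solution
y = A @ x_true + 1e-6 * rng.standard_normal(n)     # noisy data

# Naive inversion: noise components are amplified by 1/s, up to exp(n-1).
x_naive = np.linalg.solve(A, y)

# Spectral cutoff regularization: discard components below level alpha.
alpha = 1e-4
keep = s > alpha
x_reg = V[:, keep] @ np.diag(1.0 / s[keep]) @ U[:, keep].T @ y

print(np.linalg.norm(x_naive - x_true))  # large: noise amplified by 1/s
print(np.linalg.norm(x_reg - x_true))    # small: stable approximation
```

The cutoff level plays the role of the regularization parameter; its choice balances the truncation error against the propagated data noise, which is exactly where a priori bounds on the exact solution enter the convergence-rate analysis.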


2016 ◽  
Vol 54 (1) ◽  
pp. 341-360 ◽  
Author(s):  
Claudia König ◽  
Frank Werner ◽  
Thorsten Hohage

2008 ◽  
Vol 8 (3) ◽  
pp. 279-293 ◽  
Author(s):  
M.T. NAIR ◽  
U. TAUTENHAHN

For solving linear ill-posed problems with noisy data, regularization methods are required. We analyze a simplified regularization scheme in Hilbert scales for operator equations with nonnegative self-adjoint operators. By exploiting the operator monotonicity of certain functions, order-optimal error bounds are derived that characterize the accuracy of the regularized approximations. These error bounds are obtained under general smoothness conditions.


2018 ◽  
Vol 26 (2) ◽  
pp. 277-286 ◽  
Author(s):  
Jens Flemming

Variational source conditions have proved useful for deriving convergence rates for Tikhonov regularization and other methods. Up to now, such conditions have been verified only for a few examples, or for situations that can also be handled by classical range-type source conditions. Here we show that variational source conditions are satisfied for almost every ill-posed inverse problem. Whether linear or nonlinear, whether Hilbert or Banach spaces, whether one or multiple solutions: variational source conditions are a universal tool for proving convergence rates.


2008 ◽  
Vol 20 (7) ◽  
pp. 1873-1897 ◽  
Author(s):  
L. Lo Gerfo ◽  
L. Rosasco ◽  
F. Odone ◽  
E. De Vito ◽  
A. Verri

We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same principle allowing for the numerical stabilization of a matrix inversion problem is crucial to avoid overfitting. The various methods have a common derivation but different computational and theoretical properties. We describe examples of such algorithms, analyze their classification performance on several data sets and discuss their applicability to real-world problems.
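The connection the abstract describes can be sketched concretely (this is an illustrative example, not the authors' code): a spectral filter g(σ) replaces the unstable inverse of the kernel matrix, and both Tikhonov regularization and spectral cutoff are instances of such a filter. The kernel, data, and regularization parameter below are hypothetical.

```python
import numpy as np

# Hedged sketch: spectral regularization for kernel regression.
# Both filters act on the eigenvalues of the kernel matrix K,
# stabilizing the inversion exactly as in the inverse-problem setting.
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(X) + 0.1 * rng.standard_normal(40)      # noisy samples of sin

K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)  # Gaussian kernel matrix
evals, evecs = np.linalg.eigh(K)

lam = 1e-2 * len(X)                                # regularization level

# Tikhonov filter: g(sigma) = 1 / (sigma + lam)
c_tik = evecs @ ((evecs.T @ y) / (evals + lam))

# Spectral cutoff filter: g(sigma) = 1/sigma if sigma >= lam, else 0
keep = evals >= lam
c_cut = evecs[:, keep] @ ((evecs[:, keep].T @ y) / evals[keep])

# Regularized fits on the training points
f_tik = K @ c_tik
f_cut = K @ c_cut
```

Other members of the class (Landweber iteration, ν-methods) correspond to different filter functions g; they share the same derivation but differ in computational cost and qualification, which is what drives the differing theoretical properties mentioned above.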


Acta Numerica ◽  
2018 ◽  
Vol 27 ◽  
pp. 1-111 ◽  
Author(s):  
Martin Benning ◽  
Martin Burger

Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses. In the last two decades interest has shifted from linear to nonlinear regularization methods, even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this shift towards modern nonlinear regularization methods, including their analysis, applications and issues for future research. In particular we will discuss variational methods and techniques derived from them, since they have attracted much recent interest and link to other fields, such as image processing and compressed sensing. We further point to developments related to statistical inverse problems, multiscale decompositions and learning theory.


2015 ◽  
Vol 15 (3) ◽  
pp. 279-289 ◽  
Author(s):  
Jens Flemming ◽  
Bernd Hofmann ◽  
Ivan Veselić

Based on the powerful tool of variational inequalities, recent papers have formulated convergence rate results on ℓ1-regularization for ill-posed inverse problems in infinite-dimensional spaces, under the condition that the sparsity assumption slightly fails but the solution is still in ℓ1. In the present paper, we improve those convergence rate results and apply them to the Cesàro operator equation in ℓ2 and to specific denoising problems. Moreover, we formulate in this context relationships between Nashed's types of ill-posedness and mapping properties such as compactness and strict singularity.
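As a hedged illustration of ℓ1-regularization in a finite-dimensional stand-in for the problems above (not the paper's setting), the iterative soft-thresholding algorithm (ISTA) minimizes ½‖Ax − y‖² + λ‖x‖₁; the soft-threshold step is the proximal operator of the ℓ1 penalty. All dimensions and parameters below are hypothetical.

```python
import numpy as np

# Hedged sketch: l1-regularized reconstruction via ISTA.
rng = np.random.default_rng(2)
m, n = 30, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)      # underdetermined operator
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]            # sparse exact solution
y = A @ x_true + 1e-3 * rng.standard_normal(m)    # noisy data

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = Lipschitz constant

x = np.zeros(n)
for _ in range(5000):
    grad = A.T @ (A @ x - y)                      # gradient of the data term
    z = x - step * grad
    # soft-threshold: prox of the scaled l1 penalty
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.linalg.norm(x - x_true))                 # sparse solution recovered
```

When the solution is genuinely sparse, as here, standard range-type analysis applies; the convergence rate results discussed above concern the harder regime where sparsity slightly fails but the ℓ1 penalty remains meaningful.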

