Convergence Rates for Inverse Problems with Impulsive Noise

2014 · Vol. 52 (3) · pp. 1203–1221
Author(s): Thorsten Hohage, Frank Werner

2016 · Vol. 54 (1) · pp. 341–360
Author(s): Claudia König, Frank Werner, Thorsten Hohage

2016 · Vol. 24 (4)
Author(s): Anatoly Bakushinsky, Alexandra Smirnova

Abstract: A series of recent numerical experiments for parameter estimation inverse problems in epidemiology […]


2019 · Vol. 62 (3) · pp. 445–455
Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract: Deep learning and (deep) neural networks are emerging tools for addressing inverse problems and image reconstruction tasks. Despite outstanding performance, the mathematical analysis of solving inverse problems by neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)}\mathbf{B}_\alpha$, where $\mathbf{B}_\alpha$ is a classical regularization and the network $\mathbf{N}_{\theta(\alpha)}\mathbf{B}_\alpha$ is trained to recover the missing part $\mathrm{Id}_X - \mathbf{B}_\alpha$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming sufficient decay of the associated distance function. We demonstrate that our results recover existing convergence and convergence-rate results for filter-based regularization methods, as well as the recently introduced null space network, as special cases. Numerical results are presented for a tomographic sparse-data problem and clearly demonstrate that the proposed RegNets improve on classical regularization as well as on the null space network.
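The two-step structure $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)}\mathbf{B}_\alpha$ is easy to prototype. Below is a minimal sketch for a linear toy problem in which $\mathbf{B}_\alpha$ is Tikhonov regularization and a plain linear least-squares fit stands in for the trained network $\mathbf{N}_{\theta(\alpha)}$; the forward matrix, noise level, and training setup are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the RegNet structure B_alpha + N_theta(alpha) B_alpha for
# a linear toy problem. Illustrative assumptions throughout: the forward
# matrix A, the noise level, and the linear stand-in for the network.
import numpy as np

rng = np.random.default_rng(0)

n, m = 40, 30                         # signal and data dimensions
A = rng.normal(size=(m, n))           # toy forward operator
A[:, n // 2:] *= 1e-3                 # damp half the columns -> ill-posed
alpha = 1e-2                          # regularization parameter

def B_alpha(y):
    """Classical Tikhonov reconstruction (A^T A + alpha I)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# "Train" a linear stand-in for N_theta(alpha): from Tikhonov reconstructions
# of noisy data it should predict the missing part x - B_alpha(y).
X_train = rng.normal(size=(n, 200))                       # training signals
Y_train = A @ X_train + 1e-3 * rng.normal(size=(m, 200))  # noisy data
Z = np.stack([B_alpha(y) for y in Y_train.T], axis=1)     # reconstructions
R = X_train - Z                                           # missing parts
C, *_ = np.linalg.lstsq(Z.T, R.T, rcond=None)             # fit Z^T C ~ R^T
N = C.T                                                   # residual map z -> r

def regnet(y):
    """RegNet reconstruction: B_alpha(y) + N(B_alpha(y))."""
    z = B_alpha(y)
    return z + N @ z

x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=m)
print("Tikhonov error:", np.linalg.norm(B_alpha(y) - x_true))
print("RegNet error:  ", np.linalg.norm(regnet(y) - x_true))
```

The learned map corrects the part of the signal that the classical regularization suppresses; swapping the linear fit for an actual trained neural network recovers the setting analyzed in the paper.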


2015 · Vol. 36 (5) · pp. 549–566
Author(s): Roman Andreev, Peter Elbau, Maarten V. de Hoop, Lingyun Qiu, Otmar Scherzer

2022 · Vol. 41 (1) · pp. 1–10
Author(s): Jonas Zehnder, Stelian Coros, Bernhard Thomaszewski

We present a sparse Gauss-Newton solver for accelerated sensitivity analysis with applications to a wide range of equilibrium-constrained optimization problems. Dense Gauss-Newton solvers have shown promising convergence rates for inverse problems, but the cost of assembling and factorizing the associated matrices has so far been a major stumbling block. In this work, we show how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently. This leads to drastically reduced computation times for many inverse problems, which we demonstrate on a diverse set of examples. We furthermore show links between sensitivity analysis and nonlinear programming approaches based on Lagrange multipliers, and prove equivalence under specific assumptions that apply to our problem setting.
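To make the computational point concrete, here is a generic Gauss-Newton iteration in which the normal-equations matrix $J^\top J$ is assembled and factorized in sparse form with SciPy. This is a hedged sketch of why sparsity pays off, not the SGN solver of the paper; the toy residual and its Jacobian are assumptions.

```python
# Generic sparse Gauss-Newton step for min_p 0.5 * ||r(p)||^2, exploiting a
# sparse Jacobian. A sketch of the general idea only, not the paper's SGN
# solver; the chain-structured residual below is a toy assumption.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def residual(p):
    # Toy residual: a 1D chain p[i+1] = p[i]^2 with the anchor p[0] = 1.
    return np.concatenate([p[1:] - p[:-1] ** 2, [p[0] - 1.0]])

def jacobian(p):
    # Sparse Jacobian of the residual (banded structure from the chain).
    n = p.size
    rows, cols, vals = [], [], []
    for i in range(n - 1):
        rows += [i, i]; cols += [i + 1, i]; vals += [1.0, -2.0 * p[i]]
    rows.append(n - 1); cols.append(0); vals.append(1.0)
    return sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

def gauss_newton(p, iters=20, damping=1e-8):
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        # For sparse J, the Gauss-Newton matrix J^T J stays sparse and can
        # be factorized cheaply with a sparse LU decomposition.
        H = (J.T @ J + damping * sp.identity(p.size)).tocsc()
        p = p - splu(H).solve(J.T @ r)
    return p

p0 = np.full(8, 0.5)
print(gauss_newton(p0))   # converges toward p_i = 1 for the toy chain
```

When the Jacobian has only O(n) nonzeros, assembling $J^\top J$ and its sparse factorization scale far better than their dense counterparts, which is exactly the bottleneck the paper targets.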


2018 · Vol. 26 (2) · pp. 277–286
Author(s): Jens Flemming

Abstract: Variational source conditions have proved useful for deriving convergence rates for Tikhonov's regularization method and also for other methods. Up to now, such conditions have been verified only for a few examples, or for situations that can also be handled by classical range-type source conditions. Here we show that variational source conditions are satisfied for almost every ill-posed inverse problem. Whether linear or nonlinear, whether in Hilbert or Banach spaces, whether with one or multiple solutions, variational source conditions are a universal tool for proving convergence rates.
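For orientation, one widely used form of a variational source condition reads as follows; the notation here is assumed and need not match the exact variant analyzed in the paper. With forward operator $F$, penalty functional $\mathcal{R}$, exact solution $x^\dagger$, and Bregman distance $D_\xi$ with respect to some $\xi \in \partial\mathcal{R}(x^\dagger)$, one requires
$$\langle \xi, x^\dagger - x \rangle \;\le\; \beta\, D_\xi(x, x^\dagger) \;+\; \varphi\bigl(\|F(x) - F(x^\dagger)\|\bigr) \qquad \text{for all } x \in M,$$
with $\beta \in [0,1)$ and a concave index function $\varphi$. Under a suitable a priori parameter choice, Tikhonov regularization then admits the convergence rate $D_\xi(x_\alpha^\delta, x^\dagger) = \mathcal{O}(\varphi(\delta))$ as the noise level $\delta \to 0$.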

