Inverse problems in imaging science: from classical regularization methods to state-of-the-art Bayesian methods

Author(s):  
Ali Mohammad-Djafari
2010
Author(s):  
Valeriy Titarenko ◽  
Robert Bradley ◽  
Christopher Martin ◽  
Philip J. Withers ◽  
Sofya Titarenko

Author(s):  
Risheng Liu

Numerous tasks at the core of statistics, learning, and vision are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations remains a laborious process that can only be guided by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behavior of these redesigned iterations, so the significance of such methods remains unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable framework for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed paradigm allows us to generate globally convergent trajectories for learning-based iterative methods. Leveraging this framework, we achieve state-of-the-art performance on several real applications.
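The abstract's "learning-based iterative methods" unroll a classical optimization scheme and treat its per-iteration parameters as learnable. As a minimal sketch (not the paper's actual model), the following shows an unrolled ISTA iteration for a sparse linear inverse problem, where the step sizes and thresholds would be the trained parameters; here they are fixed constants:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm, a common sparsity prior.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, b, step_sizes, thresholds):
    """Run a fixed number of ISTA iterations for min ||Ax - b||^2/2 + prior.

    In a learning-based iterative method, `step_sizes` and `thresholds`
    would be trainable; here they are given constants.
    """
    x = np.zeros(A.shape[1])
    for alpha, theta in zip(step_sizes, thresholds):
        # Gradient step on the data-fidelity term ||Ax - b||^2 / 2.
        x = x - alpha * A.T @ (A @ x - b)
        # Proximal step enforcing the sparsity prior.
        x = soft_threshold(x, theta)
    return x

# Recover a 3-sparse signal from 50 noiseless random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true

x_hat = unrolled_ista(A, b, step_sizes=[0.1] * 300, thresholds=[0.005] * 300)
```

The point of unrolling is that a fixed iteration budget with tuned (learned) per-step parameters can substantially outperform the hand-chosen constants used here.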


2009
Vol 25 (3)
pp. 035004
Author(s):  
A DeCezaro ◽  
A Leitão ◽  
X-C Tai

2017
Vol 5
pp. 179-189
Author(s):  
Ryo Fujii ◽  
Ryo Domoto ◽  
Daichi Mochihashi

This paper presents a novel hybrid generative/discriminative model of word segmentation based on nonparametric Bayesian methods. Unlike ordinary discriminative word segmentation, which relies only on labeled data, our semi-supervised model also leverages huge amounts of unlabeled text to automatically learn new “words”, and further constrains them using labeled data to segment non-standard texts such as those found in social networking services. Specifically, our hybrid model combines a discriminative classifier (CRF; Lafferty et al., 2001) and unsupervised word segmentation (NPYLM; Mochihashi et al., 2009), with a transparent exchange of information between the two model structures within a semi-supervised framework (JESS-CM; Suzuki and Isozaki, 2008). We confirmed that it appropriately segments non-standard texts like those in Twitter and Weibo, and achieves nearly state-of-the-art accuracy on standard datasets in Japanese, Chinese, and Thai.
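Discriminative word segmenters of the kind described above typically cast segmentation as per-character sequence labeling. As a minimal sketch (the tag scheme and function name are illustrative, not from the paper), the following decodes B/I/E/S character tags, such as a trained CRF would predict, into a word segmentation:

```python
def tags_to_words(chars, tags):
    """Convert per-character B/I/E/S tags into a word segmentation.

    B = begin, I = inside, E = end of a multi-character word; S = single.
    A trained CRF would predict `tags`; here they are supplied by hand.
    """
    words, current = [], []
    for ch, tag in zip(chars, tags):
        current.append(ch)
        if tag in ("E", "S"):
            words.append("".join(current))
            current = []
    if current:  # tolerate a tag sequence that ends mid-word
        words.append("".join(current))
    return words

# Japanese example: 東京/都/に/住む ("live in Tokyo metropolis").
words = tags_to_words(list("東京都に住む"), list("BESSBE"))
```

In the hybrid model, the unsupervised component (NPYLM) supplies segmentation evidence for unlabeled text, while labeled data constrains the tagger through the CRF; the decoding step itself stays this simple.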


Bayesian econometric methods have enjoyed an increase in popularity in recent years. Econometricians, empirical economists, and policymakers are increasingly making use of Bayesian methods. The Oxford Handbook of Bayesian Econometrics is a single source about Bayesian methods in specialized fields. It contains articles by leading Bayesians on the latest developments in their specific fields of expertise. The volume provides broad coverage of the application of Bayesian econometrics in the major fields of economics and related disciplines, including macroeconomics, microeconomics, finance, and marketing. It reviews the state of the art in Bayesian econometric methodology, with articles on posterior simulation and Markov chain Monte Carlo methods, Bayesian nonparametric techniques, and the specialized tools used by Bayesian time series econometricians such as state space models and particle filtering. It also includes articles on Bayesian principles and methodology.
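The posterior simulation methods the handbook surveys rest on samplers like Metropolis-Hastings. As a minimal, self-contained sketch (not taken from the handbook), the following draws from the posterior of a normal mean under a standard normal prior, a conjugate case where the exact posterior is known for comparison:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter.

    log_post: unnormalized log posterior density.
    """
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_post(x)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + proposal_sd * rng.standard_normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Normal likelihood (unit variance) with a N(0, 1) prior on the mean,
# so the exact posterior mean is sum(data) / (n + 1).
data = np.array([1.2, 0.8, 1.5, 1.1, 0.9])

def log_post(mu):
    return -0.5 * mu**2 - 0.5 * np.sum((data - mu) ** 2)

draws = metropolis_hastings(log_post, x0=0.0, n_samples=20000)
posterior_mean = draws[5000:].mean()  # discard burn-in
```

More elaborate tools covered in the volume, such as state space models and particle filtering, build on the same principle of approximating a posterior by simulation.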

