Polarimetric SAR image fusion using nonnegative matrix factorisation and improved-RGB model

2010 ◽ Vol 46 (20) ◽ pp. 1399 ◽ Author(s): R. Guo, L. Zhang, M. Xing, J. Li
2016 ◽ Vol 12 ◽ pp. P873-P874 ◽ Author(s): Nienke M.E. Scheltens, Betty M. Tijms, Teddy Koene, Frederik Barkhof, Charlotte E. Teunissen, ...

2009 ◽ Vol 2009 ◽ pp. 1-19 ◽ Author(s): M. W. Spratling, K. De Meyer, R. Kompass

This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm, which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even when those elementary features overlap considerably. To test the algorithm, a novel artificial task is introduced that is similar to the frequently used bars problem but employs squares rather than bars, increasing the degree of overlap between components. Using this task, we first investigate how the proposed method parses artificial images composed of overlapping features, given the correct representation of the individual components; second, we investigate how well it learns those elementary components from artificial training images. We compare the performance of the proposed algorithm with that of its predecessors, including variants of those algorithms that have produced state-of-the-art results on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with the overlap and occlusion present in this task.
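The overlapping-squares task and the kind of divisive, nonnegative learning the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' DIM algorithm: the image size, the dictionary of square positions, the number of training images, and the use of standard Lee-Seung multiplicative updates for the KL-divergence NMF objective are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "squares" task: each 6x6 image is a superposition of a few
# 3x3 squares drawn from a dictionary of all 16 positions, so neighbouring
# components overlap considerably (dimensions are assumptions).
SIDE, SQ = 6, 3
positions = [(r, c) for r in range(SIDE - SQ + 1) for c in range(SIDE - SQ + 1)]

def make_image(k=2):
    img = np.zeros((SIDE, SIDE))
    for i in rng.choice(len(positions), size=k, replace=False):
        r, c = positions[i]
        img[r:r + SQ, c:c + SQ] = 1.0  # squares overlap where patches intersect
    return img.ravel()

# Data matrix: one flattened image per column (pixels x samples).
V = np.stack([make_image() for _ in range(500)], axis=1)

def kl_nmf(V, rank, iters=200, eps=1e-9):
    """Standard multiplicative updates for KL-divergence NMF (Lee-Seung)."""
    m, n = V.shape
    W = rng.random((m, rank)) + eps   # basis images (image components)
    H = rng.random((rank, n)) + eps   # per-image activations
    for _ in range(iters):
        R = V / (W @ H + eps)                           # divisive error signal
        W *= (R @ H.T) / (H.sum(axis=1) + eps)          # update components
        R = V / (W @ H + eps)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps) # update activations
    return W, H

W, H = kl_nmf(V, rank=len(positions))
```

The ratio `V / (W @ H)` acts as a divisive error signal, which is the point of contact with the negative-feedback networks the paper discusses: the input is divided by a reconstruction fed back from the current activations, rather than subtracted from it.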


2009 ◽ Vol 2009 ◽ pp. 1-17 ◽ Author(s): Ali Taylan Cemgil

We describe nonnegative matrix factorisation (NMF) with a Kullback-Leibler (KL) error measure in a statistical framework, with a hierarchical generative model consisting of an observation and a prior component. Omitting the prior leads to the standard KL-NMF algorithms as special cases, where maximum likelihood parameter estimation is carried out via the Expectation-Maximisation (EM) algorithm. Starting from this view, we develop full Bayesian inference via variational Bayes or Monte Carlo. Our construction retains conjugacy and enables us to develop more powerful models while retaining attractive features of standard NMF such as monotonic convergence and easy implementation. We illustrate our approach on model order selection and image reconstruction.
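The Poisson observation model behind KL-NMF, and the maximum-likelihood special case the abstract mentions, can be illustrated with a short sketch. All dimensions and parameter values here are assumptions made for the example; the updates are the standard multiplicative KL-NMF updates, which coincide with EM for the Poisson model when the prior is omitted, and monotonic decrease of the objective is the convergence property the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generative view (illustrative sizes): X_ij ~ Poisson((T V)_ij) with
# nonnegative factors T (templates) and V (excitations).
F, N, K = 10, 40, 3
T_true = rng.gamma(2.0, 1.0, size=(F, K))
V_true = rng.gamma(2.0, 1.0, size=(K, N))
X = rng.poisson(T_true @ V_true).astype(float)

def kl_div(X, Y, eps=1e-12):
    # Generalised KL divergence D(X || Y), the error measure minimised by EM.
    return np.sum(X * np.log((X + eps) / (Y + eps)) - X + Y)

def em_kl_nmf(X, K, iters=100, eps=1e-12):
    """ML estimation via EM; reduces to multiplicative KL-NMF updates."""
    F, N = X.shape
    T = rng.random((F, K)) + eps
    V = rng.random((K, N)) + eps
    losses = []
    for _ in range(iters):
        R = X / (T @ V + eps)
        T *= (R @ V.T) / (V.sum(axis=1) + eps)
        R = X / (T @ V + eps)
        V *= (T.T @ R) / (T.sum(axis=0)[:, None] + eps)
        losses.append(kl_div(X, T @ V))
    return T, V, losses

T, V, losses = em_kl_nmf(X, K)
```

Adding Gamma priors on `T` and `V`, as in the hierarchical model the abstract describes, keeps the model conjugate to the Poisson likelihood, which is what makes the variational and Monte Carlo extensions tractable.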

