Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter

Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 118 ◽  
Author(s):  
Yudan Liu ◽  
Xiaomin Yang ◽  
Rongzhu Zhang ◽  
Marcelo Keese Albertini ◽  
Turgay Celik ◽  
...  

Image fusion is a practical technology with applications in many fields, such as medicine, remote sensing and surveillance. This paper introduces an image fusion method using multi-scale decomposition and joint sparse representation. First, joint sparse representation decomposes the two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately, and the final weight maps are obtained from the initial maps by joint bilateral filtering. Then, multi-scale decomposition of the innovation images is performed with the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image, which is combined with the common image to produce the ultimate fused image. The experimental results show that our method’s average metrics are: mutual information (MI), 5.3377; feature mutual information (FMI), 0.5600; normalized weighted edge preservation value (QAB/F), 0.6978; and nonlinear correlation information entropy (NCIE), 0.8226. Our method achieves better performance than state-of-the-art methods in both visual perception and objective quantification.
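The final recombination step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `fuse_innovations` and its argument names are hypothetical, the weight map is assumed to be normalized to [0, 1], and the joint sparse decomposition and rolling guidance filtering that produce the inputs are outside the sketch:

```python
import numpy as np

def fuse_innovations(common, innov_a, innov_b, weight_a):
    """Combine the shared (common) component with a per-pixel weighted
    blend of the two innovation images, as in the final fusion step."""
    weight_a = np.clip(weight_a, 0.0, 1.0)  # keep weights in [0, 1]
    fused_innovation = weight_a * innov_a + (1.0 - weight_a) * innov_b
    return common + fused_innovation
```

With a uniform weight of 0.5 this reduces to the common image plus the average of the two innovation images.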

Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multi-directional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method preserves edges and extracts directional information simultaneously. To obtain better fused sub-band coefficients, a Convolutional Sparse Representation (CSR) based approximation sub-band fusion rule is introduced, together with a Pulse Coupled Neural Network (PCNN) based detail sub-band fusion strategy that uses the New Sum of Modified Laplacian (NSML) as the external input. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: By combining the fast guided filter with the nonsubsampled directional filter bank, this paper proposes a multi-scale, directional, edge-preserving image fusion method that preserves edges while extracting directional information.
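As a rough illustration of the activity measure feeding the PCNN, the standard sum-of-modified-Laplacian can be sketched as below; function names are hypothetical, the window radius is an assumption, and the exact "New" weighting used in the paper's NSML variant is not reproduced:

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian: sum of absolute second differences
    along rows and columns (edge-replicated at the borders)."""
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def sum_modified_laplacian(img, radius=1):
    """Sum the modified Laplacian over a (2*radius+1)^2 neighbourhood,
    giving a per-pixel focus/activity measure."""
    ml = modified_laplacian(img)
    h, w = ml.shape
    p = np.pad(ml, radius, mode='edge')
    out = np.zeros((h, w))
    win = 2 * radius + 1
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out
```

A flat region yields zero activity, while edges and texture yield large values, which is what makes the measure a useful external stimulus for the PCNN.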


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution and multi-scale methods. Each approach captures only a particular class of features (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm is employed in both the ridgelet and wavelet domains to minimise redundancy. Simulations have been performed on different sets of MR and CT-scan images taken from ‘The Whole Brain Atlas'. Performance is evaluated using image-quality parameters such as Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between retrieving information content and preserving morphological detail in the final fused image in the wavelet and ridgelet domains.
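The PCA-based fusion rule referred to above typically derives global weights from the principal eigenvector of the two images' covariance matrix. A minimal sketch, with hypothetical function names and operating directly on single-channel arrays rather than on wavelet or ridgelet coefficients:

```python
import numpy as np

def pca_fusion_weights(img1, img2):
    """Weights from the dominant eigenvector of the 2x2 covariance
    matrix of the two (flattened) images, normalized to sum to 1."""
    data = np.stack([img1.ravel(), img2.ravel()])
    cov = np.cov(data)                      # rows are variables
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues ascending
    v = np.abs(vecs[:, np.argmax(vals)])    # principal eigenvector
    v = v / v.sum()
    return v[0], v[1]

def pca_fuse(img1, img2):
    """Weighted average of the two images using PCA-derived weights."""
    w1, w2 = pca_fusion_weights(img1, img2)
    return w1 * img1 + w2 * img2
```

The image carrying more variance (information) receives the larger weight, which is the redundancy-minimising behaviour the abstract describes.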


Author(s):  
Chengfang Zhang

Multifocus image fusion can obtain an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed in SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied to the low-pass fusion, while the high-pass bands are fused using the popular “max-absolute” rule as the activity-level measurement. The fused image is finally obtained by performing the inverse MST on the fused coefficients. Experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of image definition.
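The “max-absolute” high-pass rule is simple to state: at each position, keep the coefficient with the larger magnitude. A minimal sketch (the function name is hypothetical; the CSR low-pass fusion is not shown):

```python
import numpy as np

def fuse_highpass_max_abs(band_a, band_b):
    """Per-coefficient selection: keep whichever high-pass coefficient
    has the larger absolute value (ties go to band_a)."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)
```

Because high-pass coefficients with large magnitude correspond to strong edges and detail, this rule acts as a simple activity-level measurement for the detail bands.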


2013 ◽  
Vol 52 (5) ◽  
pp. 057006 ◽  
Author(s):  
Qiheng Zhang ◽  
Yuli Fu ◽  
Haifeng Li ◽  
Jian Zou

2014 ◽  
Vol 536-537 ◽  
pp. 111-114 ◽  
Author(s):  
De Xiang Zhang ◽  
Hong Hai Wang ◽  
Feng Xue

The curvelet transform combines multi-scale and multi-directional analysis, making it well suited to objects with curved edges, and its applications in image fusion have grown rapidly. First, the curvelet transform decomposes several polarization images into low-frequency coefficients and multi-scale, multi-directional high-frequency coefficients. The low-frequency coefficients are fused by averaging. For each directional high-frequency sub-band, the coefficients with the larger region-variance measurement are selected. Finally, the fused image is obtained by applying the inverse curvelet transform to the fused coefficients. In the present work, an algorithm for image fusion based on the curvelet transform was implemented, analyzed, and compared with a wavelet-based fusion algorithm. Experimental results show that the proposed algorithm better preserves edge and texture information than wavelet-based image fusion algorithms.
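The two fusion rules above (averaging for the low-frequency coefficients, region-variance selection for the high-frequency sub-bands) can be sketched as follows; the function names are hypothetical, the 3×3 window is an assumed size, and the curvelet decomposition itself is outside the sketch:

```python
import numpy as np

def local_variance(band, radius=1):
    """Variance over a (2*radius+1)^2 window at each pixel (edge-padded)."""
    h, w = band.shape
    p = np.pad(band.astype(float), radius, mode='edge')
    win = 2 * radius + 1
    s = np.zeros((h, w))
    s2 = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            patch = p[dy:dy + h, dx:dx + w]
            s += patch
            s2 += patch ** 2
    n = win * win
    return s2 / n - (s / n) ** 2

def fuse_lowpass_average(low_a, low_b):
    """Average fusion rule for the low-frequency coefficients."""
    return (low_a + low_b) / 2.0

def fuse_highpass_region_variance(band_a, band_b, radius=1):
    """Keep the coefficient whose neighbourhood has the larger variance."""
    keep_a = local_variance(band_a, radius) >= local_variance(band_b, radius)
    return np.where(keep_a, band_a, band_b)
```

Regions with higher local variance carry more edge and texture information, so selecting by region variance favours the sharper source at each position.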


2017 ◽  
Vol 11 (11) ◽  
pp. 1041-1049 ◽  
Author(s):  
Fatemeh Fakhari ◽  
Mohammad.R. Mosavi ◽  
Mehdi.M. Lajvardi
