Image Fusion of Brain Images using Redundant Discrete Wavelet Transform

2013 ◽  
Vol 74 (4) ◽  
pp. 7-11
Author(s):  
Umesh B. Mantale ◽  
Vishwajit B. Gaikwad

Author(s):  
Jianhua Liu ◽  
Peng Geng ◽  
Hongtao Ma

Purpose This study aims to obtain a more precise decision map for fusing the source images by a coefficient-significance method. In multifocus image fusion, a better decision map is very important to the fusion result. When distinguishing the well-focused part of an image from the blurred part, the edge between the two parts is the most difficult region to process. Coefficient significance is very effective in generating a better decision map for fusing multifocus images. Design/methodology/approach The energy of Laplacian is applied to the approximation coefficients of the redundant discrete wavelet transform. On the other side, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients. Findings Owing to the shift-invariance of the redundant discrete wavelet transform and the effectiveness of the fusion rule, the presented fusion method is superior to the region-energy method in the harmonic cosine wavelet domain, pixel significance with the cross bilateral filter, and the multiscale geometric analysis method of the Ripplet transform. Originality/value In the redundant discrete wavelet domain, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients of the source images.
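The energy-of-Laplacian rule for the approximation band described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the window size, the wrap-around boundary handling, and the helper names (`energy_of_laplacian`, `fuse_approx`) are our assumptions.

```python
import numpy as np

def energy_of_laplacian(band, win=3):
    """Per-pixel energy of Laplacian (EOL) focus measure over a
    win x win neighbourhood (hypothetical helper; boundary pixels
    wrap around via np.roll for simplicity)."""
    # discrete Laplacian via 4-neighbour differences
    lap = (np.roll(band, 1, 0) + np.roll(band, -1, 0)
           + np.roll(band, 1, 1) + np.roll(band, -1, 1) - 4.0 * band)
    e = lap ** 2
    # box-sum the squared Laplacian over the window
    k = win // 2
    acc = np.zeros_like(e)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(e, dy, 0), dx, 1)
    return acc

def fuse_approx(a1, a2):
    """Keep, at each position, the approximation coefficient whose
    local EOL is larger (ties go to the first source)."""
    mask = energy_of_laplacian(a1) >= energy_of_laplacian(a2)
    return np.where(mask, a1, a2)
```

In practice the two inputs would be the approximation sub-bands of the redundant DWT of the two source images; the covariance-based rule for the detail coefficients is a separate step not shown here.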


2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

Abstract In this paper, considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient-selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. In this method, the coefficients in the low-frequency domain with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum-neighboring-energy fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, and is thus an effective multi-focus image fusion method.
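The high-frequency rule and the consistency-verification step in this abstract can be sketched as follows. This is a plausible reading under stated assumptions, not the paper's code: the consistency check is modelled as a simple majority vote over the decision map, and the function names are ours.

```python
import numpy as np

def neighbor_energy(band, win=3):
    """Sum of squared coefficients over a win x win neighbourhood
    (wrap-around boundaries for brevity)."""
    e = band ** 2
    k = win // 2
    acc = np.zeros_like(e)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(e, dy, 0), dx, 1)
    return acc

def consistency_verify(mask, win=3):
    """Majority filter on the binary decision map: a pixel keeps
    source A only if most of its neighbours also chose A (one simple
    way to realise the consistency verification described above)."""
    k = win // 2
    votes = np.zeros(mask.shape)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            votes += np.roll(np.roll(mask.astype(float), dy, 0), dx, 1)
    return votes > (win * win) / 2.0

def fuse_detail(d1, d2):
    """Maximum-neighboring-energy selection followed by consistency
    verification, applied to one high-frequency sub-band pair."""
    mask = neighbor_energy(d1) >= neighbor_energy(d2)
    mask = consistency_verify(mask)
    return np.where(mask, d1, d2)
```

The same pattern would be repeated for each high-frequency sub-band at each decomposition level before the inverse DWT reconstructs the fused image.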


2016 ◽  
Vol 14 (4) ◽  
pp. 1662-1668 ◽  
Author(s):  
Ernano Arrais Junior ◽  
Ricardo Alexandro de Medeiros Valentim ◽  
Glaucio Bezerra Brandao

2011 ◽  
Vol 145 ◽  
pp. 119-123
Author(s):  
Ko Chin Chang

For a general image-capture device, it is difficult to obtain an image with every object in focus. To solve the problem of fusing multiple same-viewpoint images taken with different focal settings, a novel image fusion algorithm based on the local energy pattern (LEP) is proposed in this paper. First, each source image is decomposed separately using the discrete wavelet transform (DWT). Second, the LEP is calculated from each pixel and its surrounding pixels, and is then used to compute the new coefficient at that pixel from the transformed images with the proposed weighted fusing rules; the rules apply different operations to the low-band and high-band coefficients. Finally, the fused image is reconstructed from the new sub-band coefficients. Moreover, the reconstructed image represents the captured scene in more detail. Experimental results demonstrate that the scheme performs better than traditional discrete cosine transform (DCT) and DWT methods in both visual perception and quantitative analysis.
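One plausible form of the weighted low-band rule above is to weight each source's coefficient by its normalised local energy, so the better-focused source dominates smoothly rather than by hard selection. This is a minimal numpy sketch under that assumption; the window size, the epsilon guard, and the names (`local_energy`, `weighted_low_fuse`) are ours, not the paper's.

```python
import numpy as np

def local_energy(band, win=3):
    """Sum of squared coefficients over a win x win neighbourhood
    (wrap-around boundaries for simplicity)."""
    e = band ** 2
    k = win // 2
    acc = np.zeros_like(e)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(e, dy, 0), dx, 1)
    return acc

def weighted_low_fuse(a1, a2, eps=1e-12):
    """Weighted fusion of low-band coefficients: each source is
    weighted by its normalised local energy (one reading of the
    'weighted fusing rules'; eps avoids division by zero)."""
    e1, e2 = local_energy(a1), local_energy(a2)
    w1 = e1 / (e1 + e2 + eps)
    return w1 * a1 + (1.0 - w1) * a2
```

For the high bands a harder rule (e.g. keep the coefficient with the larger local energy) would typically be used, matching the abstract's statement that the two band types are treated differently.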


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition of target perception for unmanned aerial vehicles (UAVs), which then allows a UAV to perform its various missions. Texture and color information in visible images is abundant, while target information in infrared images is more prominent. Conventional fusion methods are mostly based on region segmentation, so a fused image suited to target recognition cannot actually be obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can capture more target information while preserving more background information. Fusion experiments are conducted under three conditions: the target is stationary and observable in both the visible and infrared images, the targets are moving and observable in both images, and the target is observable only in the infrared image. Experimental results show that the proposed method generates a better fused image for airborne target perception.
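The target-region idea above can be sketched very roughly: segment a target mask from the infrared image, keep infrared intensities inside it, and blend the two sources elsewhere. Everything here is a placeholder, not the paper's method: the mean-plus-k-sigma threshold is a crude stand-in for the region segmentation, and the plain average stands in for the DWT background fusion.

```python
import numpy as np

def target_mask(ir, k=1.0):
    """Crude target segmentation: pixels much brighter than the
    infrared image's mean are treated as target (hypothetical
    stand-in for the paper's region-segmentation step)."""
    thr = ir.mean() + k * ir.std()
    return ir > thr

def fuse_ir_visible(ir, vis):
    """Inside the target region keep the infrared intensities;
    elsewhere average the two sources (placeholder for the paper's
    DWT-based background fusion)."""
    m = target_mask(ir)
    return np.where(m, ir, 0.5 * (ir + vis))
```

The split matters because it preserves the hot target exactly as infrared sees it while still letting the visible image contribute texture and color to the background.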

