Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform

2012 · Vol 2012 · pp. 1-10
Author(s): Yifeng Niu, Shengtao Xu, Lizhen Wu, Weidong Hu

Infrared and visible image fusion is an important precondition for realizing target perception on unmanned aerial vehicles (UAVs), enabling a UAV to perform its various missions. Texture and color information is abundant in visible images, while target information is more prominent in infrared images. Conventional fusion methods are mostly based on region segmentation, so a fused image suited to target recognition cannot actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which gains more target information while preserving more background information. Fusion experiments cover three conditions: a stationary target observable in both the visible and infrared images, moving targets observable in both images, and a target observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
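The target-region idea above can be sketched as a minimal pixel-domain example. The segmentation step is replaced here by a crude brightness threshold on the infrared image (an assumption, standing in for the paper's actual segmentation): infrared values are kept inside the detected region, and the two images are averaged elsewhere.

```python
import numpy as np

def target_region_fuse(ir, vis, q=0.95):
    """Fuse IR and visible images while preserving a bright IR target region.

    The region mask is a crude brightness threshold (top quantile of the
    IR image) standing in for a real segmentation step.
    """
    mask = ir >= np.quantile(ir, q)   # hypothetical target region
    fused = 0.5 * (ir + vis)          # background: simple average
    fused[mask] = ir[mask]            # target region: keep IR values
    return fused

# Toy example: a bright 2x2 IR "target" in a dark background.
ir = np.zeros((8, 8)); ir[3:5, 3:5] = 1.0
vis = np.full((8, 8), 0.4)
fused = target_region_fuse(ir, vis)
```

With this rule the hot target survives at full IR contrast while the background still carries visible-image intensity.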

2019 · Vol 64 (2) · pp. 211-220
Author(s): Sumanth Kumar Panguluri, Laavanya Mohan

Infrared and visible image fusion is now used in important applications such as military, surveillance, remote sensing and medical imaging. A discrete wavelet transform (DWT)-based image fusion method using unsharp masking is presented. The DWT decomposes the input images (infrared and visible) into approximation and detail coefficients. To improve contrast, unsharp masking is applied to the approximation coefficients. The approximation coefficients produced after unsharp masking are then merged with the average fusion rule, while the detail coefficients are merged with the max fusion rule. Finally, the inverse DWT (IDWT) generates the fused image. The proposed fusion method provides good contrast and better results in terms of mean, entropy and standard deviation when compared with existing techniques.
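The pipeline above can be sketched at a single decomposition level with a hand-rolled Haar transform (the abstract does not specify the wavelet, the number of levels, or the sharpening amount, so Haar, one level, and `k=0.7` are assumptions):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition into LL and (LH, HL, HH).
    Expects even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, details):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    LH, HL, HH = details
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def box_blur3(x):
    """3x3 mean filter with edge padding (blur step for unsharp masking)."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse_dwt_unsharp(ir, vis, k=0.7):
    """Average unsharp-masked approximations; merge details by the max rule."""
    LLa, Da = haar_dwt2(ir)
    LLb, Db = haar_dwt2(vis)
    sharpen = lambda LL: LL + k * (LL - box_blur3(LL))    # unsharp masking
    LL = 0.5 * (sharpen(LLa) + sharpen(LLb))              # average rule
    D = tuple(np.where(np.abs(da) >= np.abs(db), da, db)  # max rule
              for da, db in zip(Da, Db))
    return haar_idwt2(LL, D)
```

With `k=0` and identical inputs the pipeline reduces to the identity, which is a handy sanity check for the transform pair.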


2014 · Vol 14 (2) · pp. 102-108
Author(s): Yong Yang, Shuying Huang, Junfeng Gao, Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT)-based fusion technique with a novel coefficient-selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are separately employed to combine the low-frequency and high-frequency coefficients. In the method, the coefficients in the low-frequency domain with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum neighboring-energy-based fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, making it an effective multi-focus image fusion method.
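The high-frequency rule described above, pick the coefficient whose neighbourhood carries more energy and then clean the decision map by majority voting, can be sketched as follows (the 3x3 window size is an assumption):

```python
import numpy as np

def window_energy(c, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) neighbourhood."""
    p = np.pad(c * c, r, mode='edge')
    k = 2 * r + 1
    h, w = c.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k))

def fuse_detail(ca, cb, r=1):
    """Max neighbouring-energy selection with consistency verification."""
    choose_a = window_energy(ca, r) >= window_energy(cb, r)  # decision map
    # Consistency verification: a pixel keeps source A only if the
    # majority of its 3x3 neighbourhood also chose A (and vice versa).
    p = np.pad(choose_a.astype(float), 1, mode='edge')
    h, w = ca.shape
    votes = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    choose_a = votes > 4.5   # strict majority of the 9 votes
    return np.where(choose_a, ca, cb)
```

The majority vote removes isolated wrong decisions, which is what keeps the fused sub-band homogeneous.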


2011 · Vol 128-129 · pp. 589-593
Author(s): Yi Feng Niu, Sheng Tao Xu, Wei Dong Hu

Infrared and visible image fusion is an important precondition for realizing target perception on unmanned aerial vehicles (UAVs), on the basis of which a UAV can perform its various missions. Details are abundant in visible images, while target information is more prominent in infrared images. However, conventional fusion methods are mostly based on region segmentation, so a fused image suited to target recognition cannot actually be acquired. In this paper, a novel fusion method for infrared and visible images based on target regions in the discrete wavelet transform (DWT) domain is proposed, which gains more target information while preserving the details. Experimental results show that our method generates better fused images for target recognition.


2011 · Vol 1 (3)
Author(s): T. Sumathi, M. Hemalatha

Abstract Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian-pyramid-based transform and the curvelet-based transform. These methods demonstrate the best performance in the spatial and spectral quality of the fused image compared with other spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic cannot be extended easily to two or more dimensions, since separable wavelets built by spanning a one-dimensional wavelet have limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.


Author(s): Jianhua Liu, Peng Geng, Hongtao Ma

Purpose This study aims to obtain a more precise decision map for fusing the source images by a coefficient-significance method. In multifocus image fusion, a better decision map is very important to the fusion results. When distinguishing the well-focused part of an image from the blurred part, the edge between the parts is the most difficult to process. Coefficient significance is very effective in generating a better decision map for fusing multifocus images. Design/methodology/approach The energy of Laplacian is used on the approximation coefficients of the redundant discrete wavelet transform. On the other hand, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients. Findings Owing to the shift-invariance of the redundant discrete wavelet transform and the effectiveness of the fusion rules, the presented fusion method is superior to the region-energy method in the harmonic cosine wavelet domain, pixel significance with the cross bilateral filter, and the multiscale geometry analysis method of the Ripplet transform. Originality/value In the redundant discrete wavelet domain, a coefficient significance based on the statistical property of covariance is proposed to merge the detail coefficients of the source images.
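The energy-of-Laplacian focus measure applied to the approximation coefficients can be sketched as follows (the 4-neighbour Laplacian kernel is an assumption; windowed variants of the measure are also common):

```python
import numpy as np

def energy_of_laplacian(x):
    """Energy of Laplacian (EOL): sum of squared Laplacian responses.

    Uses the 4-neighbour kernel [[0,-1,0],[-1,4,-1],[0,-1,0]] on the
    interior of the array; sharper (better-focused) regions score higher.
    """
    lap = (4.0 * x[1:-1, 1:-1]
           - x[:-2, 1:-1] - x[2:, 1:-1]
           - x[1:-1, :-2] - x[1:-1, 2:])
    return float(np.sum(lap * lap))

# A textured (checkerboard) patch scores higher than a flat one.
yy, xx = np.mgrid[0:8, 0:8]
checker = ((yy + xx) % 2).astype(float)
flat = np.full((8, 8), 0.5)
```

Comparing the EOL of the two source images' approximation coefficients then yields the well-focused side at each location.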


2011 · Vol 145 · pp. 119-123
Author(s): Ko Chin Chang

For a general image capture device, it is difficult to obtain an image with every object in focus. To solve the fusion of multiple same-viewpoint images with different focal settings, a novel image fusion algorithm based on the local energy pattern (LGP) is proposed in this paper. First, each source image is decomposed separately using the discrete wavelet transform (DWT). Second, the LGP is calculated from each pixel and its surrounding pixels, and is then used to compute the new coefficient of that pixel from the transformed images with the proposed weighted fusion rules; the rules apply different operations to low-band and high-band coefficients. Finally, the fused image is reconstructed from the new sub-band coefficients, and the reconstructed image represents the captured scene in more detail. Experimental results demonstrate that our scheme performs better than the traditional discrete cosine transform (DCT) and discrete wavelet transform (DWT) methods in both visual perception and quantitative analysis.
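A weighted low-band rule of the kind described, with per-pixel weights derived from the energy around each coefficient, might look like this (the 3x3 window and the soft weighting are assumptions; the paper's exact LGP definition is not reproduced here):

```python
import numpy as np

def local_energy(c, r=1):
    """Sum of squared values in a (2r+1)x(2r+1) window around each pixel."""
    p = np.pad(c * c, r, mode='edge')
    k = 2 * r + 1
    h, w = c.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k))

def weighted_low_fuse(la, lb, r=1, eps=1e-12):
    """Low-band fusion: per-pixel weights proportional to local energy."""
    ea, eb = local_energy(la, r), local_energy(lb, r)
    w = ea / (ea + eb + eps)        # weight of source A at each pixel
    return w * la + (1.0 - w) * lb
```

A soft weight avoids the hard seams that a pure select-max low-band rule can leave at focus boundaries.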


2020 · Vol 39 (3) · pp. 4617-4629
Author(s): Chengrui Gao, Feiqiang Liu, Hua Yan

Infrared and visible image fusion refers to the technology that merges the visual details of visible images and thermal feature information of infrared images; it has been extensively adopted in numerous image processing fields. In this study, a dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR)-based image fusion method was proposed. In the proposed method, the infrared images and visible images were first decomposed by dual-tree complex wavelet transform to characterize their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands were enhanced by guided filtering (GF), while the low-frequency band was merged through convolutional sparse representation and choose-max strategy. Lastly, the fused images were reconstructed by inverse DTCWT. In the experiment, the objective and subjective comparisons with other typical methods proved the advantage of the proposed method. To be specific, the results achieved using the proposed method were more consistent with the human vision system and contained more texture detail information.

