Infrared image and visible image fusion based on wavelet transform

2013 ◽  
Vol 756-759 ◽  
pp. 2850-2856 ◽  
Author(s):  
Zehua Zhou ◽  
Min Tan

For the same scene, fusing an infrared image with a visible image exploits the complementary information in the source images and overcomes the limitations of any single sensor in geometric, spectral, and spatial resolution, improving image quality and helping to locate, identify, and explain physical phenomena and events. An image fusion method based on the wavelet transform is put forward. For the frequency bands produced by wavelet decomposition, selection principles for the high-frequency and low-frequency coefficients are discussed separately, emphasizing contour regions while attenuating fine detail. The fused image combines the characteristics of two or more source images and better suits human or machine visual perception, facilitating further analysis and understanding of the image as well as detection, identification, or tracking of targets.
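The coefficient-selection principles described above can be sketched as follows. The abstract does not specify the wavelet or the exact rules, so a one-level Haar transform with a max-absolute rule for the high-frequency bands and averaging for the low-frequency band is assumed here purely for illustration:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar analysis: returns (LL, (LH, HL, HH))."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_wavelet(ir, vis):
    """Average the low-frequency bands; keep the larger-magnitude
    coefficient in each high-frequency band (max-absolute rule)."""
    ll_i, hi_i = haar_dwt2(ir)
    ll_v, hi_v = haar_dwt2(vis)
    ll_f = (ll_i + ll_v) / 2.0
    hi_f = tuple(np.where(np.abs(bi) >= np.abs(bv), bi, bv)
                 for bi, bv in zip(hi_i, hi_v))
    return haar_idwt2(ll_f, hi_f)
```

Inputs are assumed to be same-sized grey-scale arrays with even dimensions; a real implementation would use multiple decomposition levels and a smoother wavelet.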


2020 ◽  
Vol 17 (8) ◽  
pp. 3660-3670
Author(s):  
N. Archana ◽  
S. Mahalakshmi ◽  
R. Dhanagopal ◽  
R. Menaka

Image fusion is an enhancement technique used to combine images acquired by different types of sensors into a single image of improved quality. In this paper, a visible image and an infrared image are combined to obtain a more informative image. A new transformation technique, applied before and after fusion, is introduced to improve image quality. To evaluate it, fusion is carried out with four different techniques: the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Non-Subsampled Contourlet Transform (NSCT), and Dual-Tree Complex Wavelet Transform (DT-CWT). Comparison of parameter values such as entropy, standard deviation, mean gradient, average pixel intensity, and spatial frequency shows that the proposed method improves image quality.
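The evaluation metrics named above are standard no-reference fusion measures and can be computed directly. The sketch below assumes 8-bit grey-scale inputs and the common textbook definitions; the paper's exact formulations may differ (average pixel intensity and standard deviation are simply `np.mean` and `np.std`):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mean_gradient(img):
    """Average magnitude of horizontal/vertical first differences."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # crop so shapes match
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """sqrt(RF^2 + CF^2): RMS of row and column first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```

Higher values of all three generally indicate a more informative, sharper fused image; a flat image scores zero on each.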


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for target perception by unmanned aerial vehicles (UAVs), which must perform various given missions. Texture and color information is abundant in visible images, while target information is more salient in infrared images. Conventional fusion methods are mostly based on region segmentation; as a result, a fused image suitable for target recognition is often not actually obtained. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which gains more target information while preserving more background information. Fusion experiments cover three cases: a stationary target observable in both the visible and infrared images, moving targets observable in both, and a target observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
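The target-region idea can be illustrated with a deliberately simplified sketch: a brightness threshold on the infrared image stands in for the paper's segmentation step, and the fused output takes infrared pixels inside the target region and visible pixels elsewhere. (The actual method additionally fuses DWT coefficients; the threshold rule here is an assumption for illustration only.)

```python
import numpy as np

def target_mask(ir, k=1.5):
    """Segment bright (hot) regions: pixels k std-devs above the mean."""
    return ir > ir.mean() + k * ir.std()

def region_fuse(ir, vis, k=1.5):
    """Infrared inside the target region, visible background elsewhere."""
    mask = target_mask(ir, k)
    return np.where(mask, ir, vis)
```

In the paper's setting the mask would instead drive different coefficient-selection rules inside and outside the target region of the wavelet decomposition.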


2020 ◽  
Vol 39 (3) ◽  
pp. 4617-4629
Author(s):  
Chengrui Gao ◽  
Feiqiang Liu ◽  
Hua Yan

Infrared and visible image fusion refers to the technology that merges the visual details of visible images with the thermal feature information of infrared images; it has been extensively adopted in numerous image processing fields. In this study, an image fusion method based on the dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR) was proposed. In the proposed method, the infrared and visible images were first decomposed by DTCWT to separate their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands were enhanced by guided filtering (GF), while the low-frequency band was merged through convolutional sparse representation and a choose-max strategy. Lastly, the fused images were reconstructed by the inverse DTCWT. In the experiments, objective and subjective comparisons with other typical methods demonstrated the advantage of the proposed method: its results were more consistent with the human visual system and contained more texture detail information.
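The guided-filtering (GF) enhancement step can be sketched with the classic box-filter formulation of the guided filter. This is a generic guided filter, not the authors' exact parameterization; the window radius `r` and regularizer `eps` are assumed values:

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window via summed-area table,
    with edge padding so the output matches the input size."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for subtraction
    n = 2 * r + 1
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / (n * n)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Edge-preserving filtering of src steered by guide: locally fit
    src ~ a*guide + b, then average the coefficients."""
    mean_g, mean_s = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mean_g * mean_s
    var = box(guide * guide, r) - mean_g ** 2
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return box(a, r) * guide + box(b, r)
```

In the paper's pipeline the filter would be applied to the DTCWT high-frequency bands (with a suitable guide image) before fusion; here it is shown stand-alone.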


2015 ◽  
Vol 32 (9) ◽  
pp. 1643 ◽  
Author(s):  
Xiang Yan ◽  
Hanlin Qin ◽  
Jia Li ◽  
Huixin Zhou ◽  
Jing-guo Zong

2019 ◽  
Vol 64 (2) ◽  
pp. 211-220
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Nowadays the results of infrared and visible image fusion are utilized in significant applications such as military, surveillance, remote sensing, and medical imaging. A discrete wavelet transform (DWT)-based image fusion method using unsharp masking is presented. DWT decomposes the input images (infrared and visible) into approximation and detail coefficients. To improve contrast, unsharp masking is applied to the approximation coefficients. The approximation coefficients produced by unsharp masking are then merged with an average fusion rule, while the detail coefficients are merged with a max fusion rule. Finally, the inverse DWT (IDWT) generates the fused image. The proposed fusion method provides good contrast and gives better performance in terms of mean, entropy, and standard deviation when compared with existing techniques.
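The unsharp-masking step and the average rule for the approximation coefficients can be sketched as below. The 3x3 mean blur and the sharpening amount are assumptions, since the paper does not state the kernel or gain:

```python
import numpy as np

def box_blur3(img):
    """3x3 mean blur with edge padding (output same size as input)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the high-pass residual img - blur(img)."""
    return img + amount * (img - box_blur3(img))

def fuse_approx(a_ir, a_vis):
    """Average rule applied to the unsharp-masked approximation bands."""
    return (unsharp_mask(a_ir) + unsharp_mask(a_vis)) / 2.0
```

Here `a_ir` and `a_vis` stand for the DWT approximation coefficients of the two inputs; the detail bands would be merged separately with a max rule before the IDWT.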


Chemosensors ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 75
Author(s):  
Hyuk-Ju Kwon ◽  
Sung-Hak Lee

Image fusion combines images with different information to create a single, information-rich image. The process may either involve synthesizing images using multiple exposures of the same scene, such as exposure fusion, or synthesizing images of different wavelength bands, such as visible and near-infrared (NIR) image fusion. NIR images are frequently used in surveillance systems because they are beyond the narrow perceptual range of human vision. In this paper, we propose an infrared image fusion method that combines high and low intensities for use in surveillance systems under low-light conditions. The proposed method utilizes a depth-weighted radiance map based on intensities and details to enhance local contrast and reduce noise and color distortion. The proposed method involves luminance blending, local tone mapping, and color scaling and correction. Each of these stages is processed in the LAB color space to preserve the color attributes of a visible image. The results confirm that the proposed method outperforms conventional methods.
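The luminance-blending stage can be illustrated with a simplified weight map in which dark regions of the visible luminance borrow more from the NIR luminance. The Gaussian weighting and the `sigma` value are illustrative assumptions, not the authors' depth-weighted radiance map:

```python
import numpy as np

def blend_luminance(vis_l, nir_l, sigma=0.2):
    """Blend visible and NIR luminance channels (inputs in [0, 1]).
    Bright visible pixels get weight ~1 (keep visible); dark pixels
    get weight ~0 (take NIR), reducing low-light noise."""
    w = np.exp(-((1.0 - vis_l) ** 2) / (2.0 * sigma ** 2))
    return w * vis_l + (1.0 - w) * nir_l
```

In the paper this blending operates on the L channel of the LAB representation, so the chromatic components of the visible image are preserved.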


Image fusion is the mechanism by which two or more images are consolidated into a single image that retains the important features of each source image. The emerging image is enhanced in overall content and is preferable to any of the base images. Certain image-processing tasks, crucial in remote sensing, require both high spatial and high spectral information in a single image. The fusion procedure incorporates intensifying, filtering, and shaping the images for better results; efficient and effective approaches for image fusion are applied here. The method takes two distinct types of images, a visible image and an infrared image. Single Scale Retinex (SSR) is applied to the visible image to obtain an enhanced image, while Principal Component Analysis (PCA) is applied to the infrared image to obtain an image with superior contrast and colour. These processed images are then decomposed into multilayer representations using the Laplacian pyramid algorithm. Finally, a weighted-average fusion method fuses the images to produce the augmented fused image.
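The Laplacian-pyramid decomposition and weighted-average fusion steps can be sketched as follows. For brevity, the reduce/expand steps use plain decimation and nearest-neighbour expansion rather than the usual Gaussian filtering; that simplification keeps this toy pyramid exactly invertible:

```python
import numpy as np

def down(img):
    """2x decimation (stand-in for a Gaussian 'reduce' step)."""
    return img[::2, ::2]

def up(img, shape):
    """Nearest-neighbour 2x expansion, cropped back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Band-pass residuals at each level, plus the coarsest level."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Collapse the pyramid from coarse to fine."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = up(cur, lap.shape) + lap
    return cur

def fuse_pyramids(a, b, w=0.5, levels=3):
    """Weighted-average fusion of two Laplacian pyramids, level by level."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    return reconstruct([w * la + (1 - w) * lb for la, lb in zip(pa, pb)])
```

In the paper's pipeline, `a` and `b` would be the SSR-enhanced visible image and the PCA-processed infrared image, and the weight `w` (assumed 0.5 here) would come from the weighted-average rule.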

