A DWT Based Novel Multimodal Image Fusion Method

2021 ◽  
Vol 38 (3) ◽  
pp. 607-617
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Multimodal image fusion is now widely used as an important processing tool in various image-related applications. Different sensors have been developed to capture useful information, chiefly infrared (IR) and visible (VI) image sensors. Fusing the images from both sensors provides more complete and accurate scene information. The major application areas of such fused images are military, surveillance, and remote sensing. For better identification of targets and understanding of the overall scene, the fused image must provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpening filter and a morphological hat transform are applied separately to the resized IR and VI images. The discrete wavelet transform (DWT) is used to produce "low-frequency" and "high-frequency" sub-bands. A "filters based mean-weighted fusion rule" and a "filters based max-weighted fusion rule" are newly introduced in this algorithm for combining the "low-frequency" and "high-frequency" sub-bands, respectively. The fused image is reconstructed with the inverse DWT (IDWT). The proposed method outperforms similar existing techniques in both subjective and objective evaluations.
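The DWT pipeline above can be sketched in NumPy. This is a minimal illustration, not the paper's method: a hand-rolled one-level Haar transform stands in for the paper's wavelet filters, a plain mean replaces the "filters based mean-weighted" rule, a max-absolute pick replaces the "filters based max-weighted" rule, and the sharpening/morphology preprocessing is omitted. The toy gradient images are illustrative only.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0
    LH = (a + b - c - d) / 4.0
    HL = (a - b + c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse_dwt(ir, vi):
    bands_ir = haar_dwt2(ir)
    bands_vi = haar_dwt2(vi)
    # Low-frequency: simple mean (stand-in for the paper's weighted rule).
    LL = (bands_ir[0] + bands_vi[0]) / 2.0
    # High-frequency: keep the coefficient with the larger magnitude.
    highs = [np.where(np.abs(hi) >= np.abs(hv), hi, hv)
             for hi, hv in zip(bands_ir[1:], bands_vi[1:])]
    return haar_idwt2(LL, *highs)

ir = np.outer(np.linspace(0, 1, 8), np.ones(8))   # toy "IR" gradient
vi = np.outer(np.ones(8), np.linspace(0, 1, 8))   # toy "visible" gradient
fused = fuse_dwt(ir, vi)
```

With identical inputs the mean and max rules are both identity, so the round trip through the transform is exact, which is a quick sanity check on the sub-band algebra.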

2013 ◽  
Vol 427-429 ◽  
pp. 1589-1592
Author(s):  
Zhong Jie Xiao

The study proposes an improved NSCT fusion method based on the characteristics of infrared and visible-light images and the fusion requirements. The paper improves the fusion rules for the high-frequency and low-frequency coefficients. The low-frequency sub-band images adopt a pixel-feature energy-weighted fusion rule, and the high-frequency sub-band images adopt a neighborhood-variance feature-information fusion rule. The fusion experiments show that the algorithm is robust and can effectively extract edge and texture information. The fused images contain abundant scene information and clear targets, so the algorithm is an effective infrared and visible image fusion method.
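The two fusion rules can be sketched on their own, independently of the NSCT stage (which is omitted here). This is a minimal NumPy reading of "pixel-feature energy-weighted" and "neighborhood-variance" rules applied to already-decomposed sub-band arrays; the helper and its 3×3 window are our assumptions.

```python
import numpy as np

def neighborhood_stat(band, stat, k=3):
    """Apply `stat` over each k x k neighborhood (edge-padded)."""
    p = k // 2
    padded = np.pad(band, p, mode='edge')
    windows = np.stack([padded[i:i + band.shape[0], j:j + band.shape[1]]
                        for i in range(k) for j in range(k)])
    return stat(windows, axis=0)

def fuse_low(low_a, low_b):
    """Pixel-energy-weighted average of the low-frequency sub-bands."""
    ea = neighborhood_stat(low_a ** 2, np.sum)
    eb = neighborhood_stat(low_b ** 2, np.sum)
    w = ea / np.maximum(ea + eb, 1e-12)
    return w * low_a + (1.0 - w) * low_b

def fuse_high(high_a, high_b):
    """Pick the coefficient whose neighborhood variance is larger."""
    va = neighborhood_stat(high_a, np.var)
    vb = neighborhood_stat(high_b, np.var)
    return np.where(va >= vb, high_a, high_b)

low_a = np.ones((4, 4)); low_b = np.zeros((4, 4))
fused_low = fuse_low(low_a, low_b)          # all energy in A -> returns A
high_a = np.arange(16.0).reshape(4, 4); high_b = np.zeros((4, 4))
fused_high = fuse_high(high_a, high_b)      # A has all the variation
```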


Author(s):  
Yahui Zhu ◽  
Li Gao

To overcome the shortcomings of traditional image fusion algorithms based on multiscale transforms, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. First, the non-subsampled contourlet transform is used to decompose the source images into low-frequency and high-frequency coefficients. Then a latent low-rank representation model decomposes the low-frequency coefficients into basic sub-bands and salient sub-bands, with a visual saliency map used as the weighting coefficient. A weighted summation is used as the fusion rule for the low-frequency basic sub-bands, and the maximum absolute value is used as the fusion rule for the low-frequency salient sub-bands; the two results are superimposed to obtain the low-frequency fusion coefficients. Intuitionistic fuzzy entropy is used as the fusion rule to measure the texture and edge information of the high-frequency coefficients. Finally, the infrared-visible fused image is obtained with the inverse non-subsampled contourlet transform. Objective and subjective comparisons on several sets of fused images show that our method effectively preserves the edge information and rich detail of the source images, producing better visual quality and objective scores than other image fusion methods.
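The compound low-frequency rule (saliency-weighted sum of the basic sub-bands plus max-absolute selection of the salient sub-bands, superimposed) can be sketched as below. The NSCT, the low-rank decomposition and the fuzzy-entropy high-frequency rule are all omitted, and the toy saliency map (distance from the band mean) is our assumption, not the paper's.

```python
import numpy as np

def saliency_weight(base):
    """Toy visual-saliency map: distance from the band's mean intensity."""
    return np.abs(base - base.mean())

def fuse_low_compound(base_a, sal_a, base_b, sal_b):
    # Weighted sum of the basic sub-bands, saliency maps as weights.
    wa, wb = saliency_weight(base_a), saliency_weight(base_b)
    total = np.maximum(wa + wb, 1e-12)
    fused_base = (wa * base_a + wb * base_b) / total
    # Max-absolute-value selection for the salient sub-bands.
    fused_sal = np.where(np.abs(sal_a) >= np.abs(sal_b), sal_a, sal_b)
    # Superimpose the two results to get the low-frequency coefficients.
    return fused_base + fused_sal

base = np.array([[1.0, 2.0], [3.0, 4.0]])
zero = np.zeros_like(base)
out_base = fuse_low_compound(base, zero, base, zero)   # identical bases pass through
out_sal = fuse_low_compound(zero, np.array([[5.0, -1.0], [0.0, 0.0]]),
                            zero, np.array([[1.0, 2.0], [0.0, 0.0]]))
```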


Author(s):  
Hui Zhang ◽  
Xinning Han ◽  
Rui Zhang

In multimodal image fusion, how to improve the visual effect of the fused image while balancing energy preservation and detail extraction has attracted increasing attention in recent years. Building on research into visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. First, multi-scale decomposition with a guided filter splits the two source images into a small-scale layer, a large-scale layer and a base layer. A maximum-absolute-value fusion rule is adopted in the small-scale layer, a weighted fusion rule based on visual parameters is adopted in the large-scale layer, and a fusion rule based on activity-level measurement is adopted in the base layer. Finally, the three fused layers are combined into the final fused image. The experimental results show that the proposed method improves edge handling and visual effect in multimodal image fusion.
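The three-layer scheme can be sketched as follows. This is a simplified stand-in, not the paper's implementation: an edge-padded box mean replaces the guided filter, local detail energy replaces the visual-parameter weights, and local mean magnitude serves as the activity-level measure.

```python
import numpy as np

def box_blur(img, k):
    """Edge-padded k x k mean filter (stand-in for the guided filter)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    win = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(k) for j in range(k)])
    return win.mean(axis=0)

def decompose(img, k_small=3, k_large=5):
    blur_s = box_blur(img, k_small)
    blur_l = box_blur(blur_s, k_large)
    return img - blur_s, blur_s - blur_l, blur_l  # small, large, base

def fuse_guided(img_a, img_b):
    sa, la, ba = decompose(img_a)
    sb, lb, bb = decompose(img_b)
    # Small-scale: maximum absolute value.
    small = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    # Large-scale: weights from local detail energy.
    wa = box_blur(la ** 2, 3); wb = box_blur(lb ** 2, 3)
    w = wa / np.maximum(wa + wb, 1e-12)
    large = w * la + (1 - w) * lb
    # Base: activity level measured as local mean magnitude.
    base = np.where(box_blur(np.abs(ba), 3) >= box_blur(np.abs(bb), 3), ba, bb)
    return small + large + base  # recombine the three fused layers

a = np.outer(np.linspace(0.0, 1.0, 8), np.linspace(0.0, 1.0, 8))
fused = fuse_guided(a, a)   # identical inputs reconstruct exactly
```

Because the three layers sum back to the source by construction, fusing an image with itself returns the image, which checks the decomposition.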


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 472 ◽  
Author(s):  
Sarmad Maqsood ◽  
Umer Javed ◽  
Muhammad Mohsin Riaz ◽  
Muhammad Muzammil ◽  
Fazal Muhammad ◽  
...  

Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, so the result contains only sharp, focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed approach, the multi-focus source images are first refined with an image enhancement algorithm so that the intensity distribution is improved for superior visualization. An edge detection method based on the spatial gradient is employed to obtain edge information from the contrast-stretched images. This improved edge information is then used by a multiscale window technique to produce local and global activity maps. A trimap and decision maps are derived from these near-focus and far-focus activity maps. Finally, the fused image is obtained by applying the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of spatial consistency and the correlation among source images and therefore obtains superior performance at object boundaries compared to region-based methods. The performance of the proposed method is compared with several recent techniques through qualitative and quantitative evaluation.
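The activity-map and decision-map stages can be sketched as below. The enhancement, trimap and matting stages are omitted; the two window sizes standing in for the "local and global" activity maps, and all helper names, are our assumptions.

```python
import numpy as np

def grad_mag(img):
    """Spatial gradient magnitude via forward differences."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def box(img, k):
    """Edge-padded k x k mean filter."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    win = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(k) for j in range(k)])
    return win.mean(axis=0)

def fuse_multifocus(near, far, k_local=3, k_global=7):
    # Local and global activity maps from gradient energy.
    act_near = box(grad_mag(near), k_local) + box(grad_mag(near), k_global)
    act_far = box(grad_mag(far), k_local) + box(grad_mag(far), k_global)
    decision = act_near >= act_far            # binary decision map
    return np.where(decision, near, far), decision

# Toy pair: left half in focus in `near`, right half in focus in `far`.
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # textured scene
near = sharp.copy(); near[:, 4:] = 0.5   # right half defocused (flat)
far = sharp.copy(); far[:, :4] = 0.5     # left half defocused (flat)
fused, dmap = fuse_multifocus(near, far)
```

Away from the focus boundary the decision map selects the textured (in-focus) source on each side.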


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Yong Yang ◽  
Wenjuan Zheng ◽  
Shuying Huang

The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses into a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in both objective and subjective evaluations.
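The clarity-feature stage can be sketched as below. The paper does not name its three features here, so local variance, spatial frequency and energy of gradient, common clarity measures in this literature, are used as plausible stand-ins, and an unweighted feature sum replaces the trained BP network's decision.

```python
import numpy as np

def window_stack(img, k=3):
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    return np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(k) for j in range(k)])

def clarity_features(img):
    """Three per-pixel clarity features: local variance, spatial
    frequency, and energy of gradient (assumed feature set)."""
    variance = window_stack(img).var(axis=0)
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    spatial_freq = np.sqrt(window_stack(gx ** 2).mean(axis=0)
                           + window_stack(gy ** 2).mean(axis=0))
    eog = gx ** 2 + gy ** 2
    return variance, spatial_freq, eog

def fuse_hvs(img_a, img_b):
    # A trained BP network would map the three features to a clarity
    # decision; an unweighted feature sum stands in for it here.
    score_a = sum(clarity_features(img_a))
    score_b = sum(clarity_features(img_b))
    return np.where(score_a >= score_b, img_a, img_b)

checker = (np.indices((6, 6)).sum(axis=0) % 2).astype(float)  # "in focus"
flat = np.full((6, 6), 0.5)                                   # "defocused"
fused = fuse_hvs(checker, flat)
```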


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

Image fusion commonly suffers from blurred edges and sparse texture. To address this problem, this study proposes an image fusion method combining the lifting wavelet transform and the median filter, with different fusion rules for each band. For the low-frequency coefficients, the coefficients are convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring region characteristics. For the high-frequency coefficients, the bands are first denoised with the median filter, and a fusion rule based on neighborhood spatial frequency with consistency verification is applied to fuse the detail sub-images. Compared with the weighted-average and regional-energy methods, the experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
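The high-frequency branch (median denoising, neighborhood spatial frequency, consistency verification) can be sketched as below; the lifting wavelet stage and the low-frequency branch are omitted, and the 3×3 window sizes are assumptions.

```python
import numpy as np

def window_stack(img, k=3):
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    return np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(k) for j in range(k)])

def median3(img):
    """3 x 3 median filter used to denoise the high-frequency bands."""
    return np.median(window_stack(img), axis=0)

def spatial_frequency(img, k=3):
    """Neighborhood spatial frequency: RMS of row and column differences."""
    rf = np.diff(img, axis=1, append=img[:, -1:]) ** 2
    cf = np.diff(img, axis=0, append=img[-1:, :]) ** 2
    return np.sqrt(window_stack(rf, k).mean(axis=0)
                   + window_stack(cf, k).mean(axis=0))

def fuse_high(high_a, high_b):
    da, db = median3(high_a), median3(high_b)
    choose_a = spatial_frequency(da) >= spatial_frequency(db)
    # Consistency verification: a pixel outvoted by its 3 x 3
    # neighborhood is flipped to match the majority.
    votes = window_stack(choose_a.astype(float)).sum(axis=0)
    choose_a = votes >= 5
    return np.where(choose_a, da, db)

band = np.zeros((5, 5)); band[2, 2] = 10.0   # isolated noise spike
denoised = median3(band)                     # spike is removed
fused = fuse_high(band, np.zeros_like(band))
```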


2021 ◽  
Vol 10 (2) ◽  
pp. 911-916
Author(s):  
C. Jittawiriyanukoon ◽  
V. Srisarkun

Regular scalar-based image fusion faces the problem of how to prioritize and proportionally enrich image details in a multi-sensor network. Fusing and manipulating computer-vision patterns from multiple sensors is practical. A fusion (integration) rule, bit-depth conversion, and truncation (due to size conflicts) of the image information are studied. Across the multi-sensor images, a fusion rule based on weighted priority is employed to restructure the prescribed details of the fused image. Experimental results confirm that the associated details between multiple images can be fused, the prescription is executed and the features are improved. Visualization in both the spatial and frequency domains to support the image analysis is also presented.
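The three ingredients named above (weighted-priority rule, bit-depth conversion, truncation on size conflicts) can be sketched together. This is our illustrative reading, not the paper's code: inputs of mixed bit depths and sizes are normalized, truncated to the smallest common frame, combined by priority weights and re-quantized to 8 bits.

```python
import numpy as np

def fuse_weighted(images, weights):
    """Weighted-priority fusion with bit-depth conversion and truncation."""
    # Truncation: crop every frame to the smallest common size.
    h = min(im.shape[0] for im in images)
    w = min(im.shape[1] for im in images)
    total = float(sum(weights))
    acc = np.zeros((h, w))
    for im, wt in zip(images, weights):
        # Bit-depth conversion: normalize each sensor to [0, 1].
        depth = (np.iinfo(im.dtype).max
                 if np.issubdtype(im.dtype, np.integer) else 1.0)
        acc += (wt / total) * (im[:h, :w].astype(float) / depth)
    # Convert the weighted result back to 8-bit output.
    return np.clip(acc * 255.0, 0, 255).astype(np.uint8)

a = np.full((4, 4), 255, dtype=np.uint8)       # 8-bit sensor
b = np.full((6, 6), 65535, dtype=np.uint16)    # 16-bit sensor, larger frame
fused = fuse_weighted([a, b], weights=[3, 1])  # A has higher priority
```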


2014 ◽  
Vol 989-994 ◽  
pp. 1082-1087
Author(s):  
Yan Chun Yang ◽  
Jian Wu Dang ◽  
Yang Ping Wang

To further improve the quality of medical image fusion, an improved medical image fusion method based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. A fusion rule based on an improved pulse-coupled neural network (PCNN) is adopted for the low-frequency sub-band coefficients. Because human vision is more sensitive to a local region of pixels than to a single pixel, it is more reasonable for regional information, rather than a single pixel, to stimulate the PCNN. Each neuron of the PCNN model is therefore stimulated by the regional spatial frequency of the low-frequency sub-band coefficients, and the low-frequency coefficients are selected according to the number of firings. When choosing the bandpass directional sub-band coefficients, the directional characteristics of the NSCT are fully exploited: a fusion rule based on the sum-modified Laplacian is presented for the bandpass directional sub-band coefficients. The experimental results show that the proposed method greatly improves the quality of the fused image compared with traditional fusion methods.
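The firing-count rule can be sketched with a simplified PCNN (standard feeding/linking/threshold updates), driven by regional spatial frequency as the stimulus. The NSCT stage, the paper's specific PCNN improvements, and all parameter values here are assumptions chosen for illustration.

```python
import numpy as np

def region_sf(band, k=3):
    """Regional spatial frequency of a sub-band (the PCNN stimulus)."""
    rf = np.diff(band, axis=1, append=band[:, -1:]) ** 2
    cf = np.diff(band, axis=0, append=band[-1:, :]) ** 2
    p = k // 2
    pad = np.pad(rf + cf, p, mode='edge')
    win = np.stack([pad[i:i + band.shape[0], j:j + band.shape[1]]
                    for i in range(k) for j in range(k)])
    return np.sqrt(win.mean(axis=0))

def pcnn_firings(stim, iters=30, beta=0.2, a_l=1.0, v_l=1.0,
                 a_t=0.2, v_t=20.0):
    """Simplified PCNN: returns how often each neuron fired."""
    L = np.zeros_like(stim); Y = np.zeros_like(stim)
    theta = np.ones_like(stim) * v_t
    fires = np.zeros_like(stim)
    link = np.ones((3, 3)); link[1, 1] = 0      # 8-neighbour linking
    for _ in range(iters):
        pad = np.pad(Y, 1, mode='constant')
        neigh = sum(link[i, j] * pad[i:i + stim.shape[0], j:j + stim.shape[1]]
                    for i in range(3) for j in range(3))
        L = np.exp(-a_l) * L + v_l * neigh       # linking input
        U = stim * (1.0 + beta * L)              # internal activity
        Y = (U > theta).astype(float)            # fire where U beats theta
        theta = np.exp(-a_t) * theta + v_t * Y   # raise threshold on firing
        fires += Y
    return fires

def fuse_low_pcnn(low_a, low_b):
    fa = pcnn_firings(region_sf(low_a))
    fb = pcnn_firings(region_sf(low_b))
    return np.where(fa >= fb, low_a, low_b)

checker = (np.indices((6, 6)).sum(axis=0) % 2).astype(float)  # textured band
flat = np.zeros((6, 6))                                       # featureless band
fused = fuse_low_pcnn(checker, flat)
```

Neurons with a stronger regional stimulus fire earlier and more often as the threshold decays, so the textured band wins everywhere.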


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Peiguang Wang ◽  
Hua Tian ◽  
Wei Zheng

The nonsubsampled contourlet transform (NSCT) has properties such as multiscale analysis, localization, multidirectionality, and shift invariance, but it limits signal analysis to the time-frequency domain. The fractional Fourier transform (FRFT) extends signal analysis to the fractional domain and offers many advantages, but it cannot characterize local signal features. A novel image fusion algorithm based on the FRFT and NSCT is proposed and demonstrated in this paper. First, the FRFT is applied to the two source images to obtain fractional-domain matrices. Second, the NSCT is performed on these matrices to acquire multiscale and multidirectional images. Third, fusion rules are applied to the low-frequency subband coefficients and the directional bandpass subband coefficients to obtain the fused coefficients. Finally, the fused image is obtained by performing the inverse NSCT and inverse FRFT on the combined coefficients. Three modes of images and three fusion rules are used to test the proposed algorithm. The simulation results show that the proposed fusion approach is better than methods based on the NSCT alone at the same parameters.
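The transform-then-fuse-then-invert skeleton can be sketched as below. NumPy has no FRFT, so the ordinary FFT is used as a stand-in (the FRFT at order a = 1 reduces to the Fourier transform); the NSCT stage is omitted, and a single max-magnitude rule is applied directly to the transform-domain coefficients.

```python
import numpy as np

def to_fractional(img):
    """Stand-in for the FRFT: order a = 1, where the FRFT reduces
    to the ordinary 2-D Fourier transform."""
    return np.fft.fft2(img)

def from_fractional(mat):
    """Inverse of the stand-in transform; real part for real images."""
    return np.real(np.fft.ifft2(mat))

def fuse_frft(img_a, img_b):
    fa, fb = to_fractional(img_a), to_fractional(img_b)
    # In place of the NSCT subband rules, keep the coefficient with
    # the larger magnitude at each frequency.
    fused = np.where(np.abs(fa) >= np.abs(fb), fa, fb)
    return from_fractional(fused)

x = np.outer(np.linspace(0, 1, 8), np.linspace(1, 2, 8))
fused = fuse_frft(x, x)   # identical inputs round-trip exactly
```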


2019 ◽  
Vol 28 (4) ◽  
pp. 505-516
Author(s):  
Wei-bin Chen ◽  
Mingxiao Hu ◽  
Lai Zhou ◽  
Hongbin Gu ◽  
Xin Zhang

Multi-focus image fusion means fusing a set of images of the same scene, captured under the same imaging conditions but with different focus points, into one completely clear image. To obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. First, the multi-focus images are decomposed by the wavelet transform. Second, the wavelet coefficients of the approximation and detail sub-images are fused based on the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For both the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and a weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective at retaining image detail.
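The weighted-gradient rule can be sketched on a single sub-band. The paper's "improved" edge detection operator is not specified here, so the standard Sobel operator stands in for it, and the wavelet decomposition stage is omitted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter3(img, kernel):
    """3 x 3 'same' correlation with edge padding (sign flip vs. true
    convolution is harmless for gradient magnitude)."""
    pad = np.pad(img, 1, mode='edge')
    return sum(kernel[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

def gradient_weight(band):
    """Sobel gradient magnitude as the weighting signal."""
    return np.hypot(filter3(band, SOBEL_X), filter3(band, SOBEL_Y))

def fuse_weighted_gradient(band_a, band_b):
    wa, wb = gradient_weight(band_a), gradient_weight(band_b)
    total = np.maximum(wa + wb, 1e-12)
    # Fall back to a plain average where neither band has edges.
    w = np.where(wa + wb > 1e-12, wa / total, 0.5)
    return w * band_a + (1.0 - w) * band_b

step = np.zeros((6, 6)); step[:, 3:] = 1.0   # band with a strong edge
flat = np.full((6, 6), 0.5)                  # edgeless band
fused = fuse_weighted_gradient(step, flat)
```

Along the step edge the gradient weight of `step` dominates and its values pass through; in flat regions the rule averages the two bands.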

