A Novel Image Fusion Method for Multi-focus Image Fusion Based on the Nonsubsampled Contourlet Transform and Multi-scale Statistic

2014, Vol. 11 (6), pp. 1715-1721. Author(s): Chunyan You
Optik, 2015, Vol. 126 (20), pp. 2508-2511. Author(s): Jingjing Wang, Qian Li, Zhenhong Jia, Nikola Kasabov, Jie Yang

2013, Vol. 12 (4), pp. 749-755. Author(s): Shen Yu, Ren Enen, Dang Jian-Wu, Wang Guo-Hua, Feng Xin

Entropy, 2021, Vol. 23 (10), p. 1362. Author(s): Hui Wan, Xianlun Tang, Zhiqin Zhu, Weisheng Li

Multi-focus image fusion combines the focused regions of multiple source images into a single all-in-focus image. The key to multi-focus image fusion is accurately detecting the focused regions, especially when the source images exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. First, the method applies two structurally complementary large-scale and small-scale decomposition schemes, performing a two-scale, double-layer singular value decomposition on each image to obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN), where different detail features are selected as the external stimulus of the PA-PCNN according to the feature information carried by each decomposition layer. Finally, the two structurally complementary decompositions and the fusion of the high- and low-frequency components yield two initial decision maps with complementary information; refining these initial maps produces the final fusion decision map, which completes the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused regions more accurately for both registered and unregistered images, and its subjective and objective evaluation scores are slightly better than those of the existing methods.
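The general pipeline the abstract describes (multi-scale split, local-energy fusion rule, decision-map refinement) can be sketched as follows. This is a simplified illustration, not the paper's method: a box-filter base/detail split stands in for the two-scale singular value decomposition, a local-energy focus measure stands in for the PA-PCNN, and all function names are assumptions.

```python
import numpy as np

def box_blur(x, r=1):
    """Simple (2r+1)x(2r+1) box filter with wrap-around borders."""
    out = np.zeros_like(x)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out += np.roll(np.roll(x, dx, 0), dy, 1)
    return out / (2 * r + 1) ** 2

def two_scale_fuse(a, b, r=2):
    """Illustrative two-scale multi-focus fusion (hypothetical sketch,
    not the paper's SVD/PA-PCNN pipeline)."""
    # Base (low-frequency) layers via smoothing; detail = residual.
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a, det_b = a - base_a, b - base_b
    # Local energy of the detail layers acts as a focus measure.
    e_a, e_b = box_blur(det_a ** 2, r), box_blur(det_b ** 2, r)
    # Initial decision map: pick the source with larger detail energy.
    dmap = (e_a >= e_b).astype(a.dtype)
    # Refine the map by a local majority vote, a stand-in for the
    # decision-map refinement step described in the abstract.
    dmap = (box_blur(dmap, r) >= 0.5).astype(a.dtype)
    return dmap * a + (1.0 - dmap) * b
```

When one source is sharp where the other is blurred, the refined decision map selects the sharper source pixel-wise; the refinement pass suppresses isolated misclassified pixels in the initial map.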


2013, Vols. 756-759, pp. 3542-3548. Author(s): Li Juan Ma, Chun Hui Zhao

To address spectral distortion and blurred texture in visible and infrared image fusion, this paper proposes a fusion method based on the Nonsubsampled Contourlet Transform (NSCT) and Pulse-Coupled Neural Networks (PCNN). First, the IHS transform decomposes the visible image into its intensity (I), hue (H), and saturation (S) components. Then, the NSCT decomposes the I component and the infrared image into low-frequency, bandpass, and high-frequency sub-band coefficients. Next, the low-frequency sub-bands are fused by weighted summation, and the remaining sub-band coefficients are fused by the PCNN. Finally, the fused I component is reconstructed with the inverse NSCT, and the fused image is obtained with the inverse IHS transform. Experiments show that the proposed method achieves better fusion quality and preserves visible spectral and detail information better than traditional methods such as the Laplacian pyramid, wavelet, and lifting-wavelet methods.
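The PCNN-based sub-band selection can be sketched minimally. This is a hypothetical simplification, not the paper's exact model: the NSCT is omitted, the linking field is a plain 3x3 neighbor sum, and all names and parameter values are assumptions.

```python
import numpy as np

def pcnn_fire_times(stim, iters=30, alpha_t=0.2, v_t=20.0, beta=0.1):
    """Minimal PCNN: accumulate per-pixel firing counts for a stimulus
    (e.g. absolute sub-band coefficients). Illustrative parameters."""
    y = np.zeros_like(stim)      # pulse output
    theta = np.ones_like(stim)   # dynamic threshold
    fire = np.zeros_like(stim)   # accumulated firing counts
    for _ in range(iters):
        # 3x3 linking input from neighbouring pulses (center excluded)
        l = sum(np.roll(np.roll(y, dx, 0), dy, 1)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)) - y
        u = stim * (1.0 + beta * l)          # internal activity
        y = (u > theta).astype(stim.dtype)   # fire where activity exceeds threshold
        theta = np.exp(-alpha_t) * theta + v_t * y  # decay, then jump on firing
        fire += y
    return fire

def fuse_subbands(ca, cb):
    """Per pixel, keep the coefficient whose PCNN fires more often."""
    mask = pcnn_fire_times(np.abs(ca)) >= pcnn_fire_times(np.abs(cb))
    return np.where(mask, ca, cb)
```

Stronger stimuli re-cross the decaying threshold sooner and therefore fire more often, so the firing-count map favors the sub-band coefficients with larger magnitude, which is the selection behavior PCNN fusion rules rely on.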


2013, Vol. 28 (3), pp. 429-434. Author(s): 傅瑶 FU Yao, 孙雪晨 SUN Xue-chen, 薛旭成 XUE Xu-cheng, 韩诚山 HAN Cheng-shan, 赵运隆 ZHAO Yun-long, et al.
