Image Fusion with Contrast Improving and Feature Preserving

2015 ◽  
Vol 2015 ◽  
pp. 1-14
Author(s):  
Din-Chang Tseng ◽  
Yu-Shuo Liu ◽  
Chang-Min Chou

The goal of image fusion is to obtain a fused image that contains the most significant information from input images captured by different sensors of the same scene. In particular, the fusion process should improve the contrast and keep the integrity of significant features from the input images. In this paper, we propose a region-based image fusion method to fuse spatially registered visible and infrared images while improving the contrast and preserving the significant features of the input images. First, the proposed method decomposes the input images into base layers and detail layers using a bilateral filter. Second, the base layers of the input images are segmented into regions. Third, a region-based decision map is proposed to represent the importance of every region. The decision map is obtained by calculating the weights of regions according to the gray-level difference between each region and its neighboring regions in the base layers. Finally, the detail layers and the base layers are fused separately by different fusion rules based on the same decision map to generate the final fused image. Experimental results qualitatively and quantitatively demonstrate that the proposed method improves the contrast of fused images and preserves more features of the input images than several previous image fusion methods.
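The base/detail split described above can be sketched in a few lines. The following is a minimal numpy illustration under stated simplifications: a naive bilateral filter, plain averaging of base layers as a stand-in for the paper's region-based decision map, and a max-absolute rule for the detail layers.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter: the edge-preserving smoother used to split an
    image into a base layer (the filtered image) and a detail layer (residual)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

def two_layer_fuse(a, b):
    """Two-layer fusion: average the base layers (a stand-in for the paper's
    region-based decision map) and keep the stronger detail coefficient."""
    base_a, base_b = bilateral_filter(a), bilateral_filter(b)
    det_a, det_b = a - base_a, b - base_b
    fused_base = 0.5 * (base_a + base_b)
    fused_detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_base + fused_detail
```

The region segmentation and weighting steps that drive the paper's actual decision map are omitted here for brevity.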

Author(s):  
Zhiguang Yang ◽  
Youping Chen ◽  
Zhuliang Le ◽  
Yong Ma

Abstract In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules, but achieve only limited success on complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GANs) to fuse infrared and visible images and achieved promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, in order to keep the content of the extreme-exposure image pair in the fused image, we increase the number of discriminators differentiating between the fused image and the extreme-exposure image pair, while a generator network is trained to generate the fused images. Through the adversarial relationship between the generator and the discriminators, the fused image comes to contain more information from the extreme-exposure image pair, yielding better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids designing hand-crafted features and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset, and the results show that the proposed model demonstrates better fusion ability than existing multi-exposure image fusion methods in both visual effect and evaluation metrics.
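The dual-discriminator idea can be summarized by the shape of the generator's objective. The sketch below is a hypothetical least-squares formulation in numpy, not the paper's actual loss: `d_under` and `d_over` stand for the two discriminators' scores on the fused image, and a content term ties the fused pixels to both exposure inputs.

```python
import numpy as np

def ganfuse_generator_loss(d_under, d_over, fused, under, over, lam=10.0):
    """Hypothetical least-squares generator objective with two discriminators:
    the generator is pushed to fool both (scores driven toward 1), while a
    pixel-wise content term, weighted by lam, keeps the fused image close to
    both extreme-exposure inputs."""
    adversarial = np.mean((d_under - 1.0) ** 2) + np.mean((d_over - 1.0) ** 2)
    content = np.mean((fused - under) ** 2) + np.mean((fused - over) ** 2)
    return adversarial + lam * content
```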


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of details because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich tiny details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that various features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed method has many advantages, including (i) better visual quality in subjective evaluation of the fused images and (ii) better objective assessment scores.
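To make the DCT fusion step concrete, here is a minimal numpy sketch of one plausible rule (keep, per coefficient, whichever feature map has the larger-magnitude DCT coefficient). The paper's multi-layer strategy is more elaborate; `fuse_features_dct` here assumes square feature maps.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; since it is orthonormal, its inverse is
    its transpose."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def fuse_features_dct(fa, fb):
    """Illustrative max-magnitude DCT coefficient rule for two feature maps:
    transform both, keep the stronger coefficient, and invert."""
    C = dct_matrix(fa.shape[0])
    Fa, Fb = C @ fa @ C.T, C @ fb @ C.T
    F = np.where(np.abs(Fa) >= np.abs(Fb), Fa, Fb)
    return C.T @ F @ C
```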


2012 ◽  
Vol 239-240 ◽  
pp. 1432-1436
Author(s):  
Zhuan Zheng Zhao

Image fusion integrates images or video sequences captured by two or more sensors, at the same time or at different times, to generate a new interpretation of the scene. Its main purpose is to increase reliability or image resolution by reducing uncertainty through the redundancy of different images. In this paper, an image fusion method based on the contourlet transform is presented. The algorithm fuses corresponding information across different resolutions and directions, which makes the fused image clearer and richer in details. Meanwhile, because of fuzzy logic's capacity for resolving uncertain problems, the method overcomes the drawbacks of traditional contourlet-based fusion algorithms and integrates as much information as possible into the fused image.
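A fuzzy-logic fusion rule for transform coefficients can be illustrated without the contourlet machinery itself. The sketch below is a hypothetical sigmoid-membership rule in numpy, applied to any pair of coefficient arrays `ca` and `cb`; it softly favours the more active coefficient instead of a hard select-max rule.

```python
import numpy as np

def fuzzy_fuse(ca, cb, sigma=1.0):
    """Fuzzy-weighted merge of two coefficient arrays: a sigmoid membership
    derived from the activity (magnitude) gap weights the two coefficients,
    so near-equal activities blend while a clear winner dominates."""
    activity_gap = np.abs(ca) - np.abs(cb)
    mu_a = 1.0 / (1.0 + np.exp(-activity_gap / sigma))  # membership of ca
    return mu_a * ca + (1.0 - mu_a) * cb
```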


2019 ◽  
Vol 9 (17) ◽  
pp. 3612
Author(s):  
Liao ◽  
Chen ◽  
Mo

As the focal length of an optical lens in a conventional camera is limited, it is usually difficult to obtain an image in which every object is in focus. This problem can be solved by multi-focus image fusion. In this paper, we propose a new multi-focus image fusion method based on a decision map and sparse representation (DMSR). First, we obtain a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary is determined by the decision map, and we implement the fusion of the transitional area based on sparse representation. The experimental results show that the proposed method is superior to five other fusion methods in both visual effect and quantitative evaluation.
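The spatial-frequency clarity measure used to resolve uncertain areas is standard and easy to state. Below is a minimal numpy version, with a hypothetical block-wise decision map built on top of it.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency: root of the mean squared horizontal and vertical
    first differences; larger values indicate a sharper (focused) patch."""
    rf = np.diff(img, axis=1)
    cf = np.diff(img, axis=0)
    return float(np.sqrt(np.mean(rf**2) + np.mean(cf**2)))

def block_decision_map(a, b, bs=8):
    """Hypothetical block-wise decision map: 1 where source `a` is sharper."""
    h, w = a.shape
    dm = np.zeros((h // bs, w // bs), dtype=int)
    for i in range(h // bs):
        for j in range(w // bs):
            sl = np.s_[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            dm[i, j] = int(spatial_frequency(a[sl]) >= spatial_frequency(b[sl]))
    return dm
```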


2021 ◽  
pp. 1-20
Author(s):  
Yun Wang ◽  
Xin Jin ◽  
Jie Yang ◽  
Qian Jiang ◽  
Yue Tang ◽  
...  

Multi-focus image fusion is a technique that integrates the focused areas of a pair or set of source images of the same scene into a single fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into the VGG-19 network, and the parameters of its convolutional layers are then transferred to a neural network containing multiple convolutional layers and skip-connection structures for feature extraction. Second, the initial decision maps are generated using the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain the second decision maps, and the source images are fused according to the second decision maps to obtain the initial fused images. Finally, the final fused image is produced by comparing the Q^ABF metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of the focused and unfocused areas in the source images, and the generated fused images are superior in both subjective and objective metrics compared with most competing methods.
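Selecting the final image by a fusion metric can be sketched as follows. Note that `gradient_preservation` is a crude stand-in for the real Q^ABF metric (which also scores edge-orientation preservation); it is used here only to show the selection step.

```python
import numpy as np

def gradient_preservation(fused, a, b):
    """Crude stand-in for Q^ABF: the fraction of the sources' strongest
    gradient magnitude that survives in the fused image."""
    grad = lambda x: np.hypot(*np.gradient(x))
    source = np.maximum(grad(a), grad(b))
    return np.minimum(grad(fused), source).sum() / (source.sum() + 1e-12)

def pick_best(candidates, a, b):
    """Return the candidate fused image scoring highest under the metric."""
    scores = [gradient_preservation(f, a, b) for f in candidates]
    return candidates[int(np.argmax(scores))]
```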


2011 ◽  
Vol 467-469 ◽  
pp. 1092-1096 ◽  
Author(s):  
Guang Ming Zhang ◽  
Zhi Ming Cui

Graph cuts are an increasingly important tool for solving energy minimization problems in computer vision and other fields, while the beamlet transform, as a time-frequency and multiresolution analysis tool, is often used in image processing, especially for image fusion. By analyzing the characteristics of DSA medical images, this paper proposes a novel DSA image fusion method combining the beamlet transform and graph-cut theory. First, the image is decomposed by the beamlet transform to obtain the coefficients of the different subbands. Then an energy function based on graph-cut theory is constructed to adjust the weights of these coefficients toward an optimal fusion objective. Finally, the inverse beamlet transform reconstructs a synthesized DSA image containing more complete and accurate detail information of the blood vessels. Comparative experiments show that our method outperforms other traditional fusion methods.


2013 ◽  
Vol 448-453 ◽  
pp. 3621-3624 ◽  
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Non-multiscale image fusion methods operate on the original images directly, applying fusion rules to the source pixels without any decomposition or transform; they may therefore also be called simple multi-sensor image fusion methods. Their advantages are low computational complexity and a simple principle, and they are currently the most widely used image fusion methods. The basic principle is to fuse a new image directly from the source pixels by selecting the larger gray value, selecting the smaller gray value, or averaging. Simple pixel-level image fusion methods thus mainly include averaging or weighted averaging of pixel gray values, selecting the larger gray value, and selecting the smaller gray value. This paper introduces the basic principle of the fusion process in detail and surveys current pixel-level fusion algorithms. Simulation results are presented to illustrate the fusion schemes. In practice, the fusion algorithm should be selected according to which imaging characteristics need to be retained.
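The four pixel-level rules named above reduce to one-liners in numpy:

```python
import numpy as np

def fuse_pixels(a, b, rule="average", w=0.5):
    """The four non-multiscale pixel-level fusion rules: average, weighted
    average (weight w on image a), select-larger, and select-smaller."""
    if rule == "average":
        return 0.5 * (a + b)
    if rule == "weighted":
        return w * a + (1.0 - w) * b
    if rule == "max":
        return np.maximum(a, b)
    if rule == "min":
        return np.minimum(a, b)
    raise ValueError(f"unknown rule: {rule}")
```

As the abstract notes, which rule is appropriate depends on which imaging characteristics should survive: select-larger favours bright targets, select-smaller dark ones, and averaging suppresses noise at the cost of contrast.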


2012 ◽  
Vol 546-547 ◽  
pp. 806-810 ◽  
Author(s):  
Xu Zhang ◽  
Yun Hui Yan ◽  
Wen Hui Chen ◽  
Jun Jun Chen

To address the pseudo-Gibbs phenomena around singularities that arise when fusing images of strip-surface defects captured from different angles, a novel image fusion method based on Bandelet-PCNN (pulse-coupled neural networks) is proposed. The low-pass subband coefficients of the source images obtained by the Bandelet transform are input into the PCNN, and coefficients are selected according to the ignition (firing) frequency over the neuron iterations. Finally, the fused image is obtained through the inverse Bandelet transform using the selected coefficients and geometric flow parameters. Experimental results on strip-surface defects such as scratches, abrasions, and pits demonstrate that the fused image effectively combines defect information from multiple source images. Compared with the classical wavelet transform and the plain Bandelet transform, the method retains more detailed and comprehensive defect information and is therefore more effective.
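A simplified PCNN firing model conveys how ignition frequency can drive coefficient selection. The sketch below is a reduced numpy model: the paper's Bandelet decomposition and full PCNN feeding/linking dynamics are omitted, and the parameter values are illustrative.

```python
import numpy as np

def pcnn_fire_counts(S, iters=10, beta=0.2, alpha_t=0.3, vt=20.0):
    """Reduced PCNN: a neuron fires when its neighbour-modulated input
    exceeds a decaying threshold, which then jumps after firing; the
    accumulated firing (ignition) count serves as the activity measure."""
    theta = np.ones_like(S)          # dynamic threshold
    Y = np.zeros_like(S)             # previous firing map
    counts = np.zeros_like(S)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(iters):
        P = np.pad(Y, 1)
        L = np.zeros_like(S)         # linking input from the 8-neighbourhood
        for di in range(3):
            for dj in range(3):
                L += kernel[di, dj] * P[di:di + S.shape[0], dj:dj + S.shape[1]]
        U = S * (1.0 + beta * L)     # internal activity
        Y = (U > theta).astype(float)
        counts += Y
        theta = theta * np.exp(-alpha_t) + vt * Y  # decay, jump after firing
    return counts

def fuse_coeffs(ca, cb):
    """Keep, per position, the coefficient whose neuron fired more often."""
    na, nb = pcnn_fire_counts(np.abs(ca)), pcnn_fire_counts(np.abs(cb))
    return np.where(na >= nb, ca, cb)
```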


2018 ◽  
Vol 189 ◽  
pp. 10021
Author(s):  
Xiaobei Wang ◽  
Rencan Nie ◽  
Xiaopeng Guo

Medical image fusion plays an important role in the detection and treatment of disease. Although numerous medical image fusion methods have been proposed, most of them decrease the contrast and lose image information. In this paper, a novel MRI and CT image fusion method is proposed that combines a rolling guidance filter, the structure tensor, and the nonsubsampled shearlet transform (NSST). First, the rolling guidance filter and the sum-modified Laplacian (SML) operator are introduced to construct the weight maps in a non-linear domain. Then a pre-fused gradient is obtained by a new weighted structure-tensor fusion method, and a pre-fused image is acquired in the NSST domain. Finally, a new energy functional is defined to constrain the gradient and pixel information of the final fused image to stay close to the pre-fused gradient and the pre-fused image. Experimental results show that the proposed method retains the edge information of the source images effectively and prevents the reduction of contrast.
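The sum-modified Laplacian (SML) operator mentioned above has a compact definition. A minimal per-pixel numpy version (without the usual window accumulation) is:

```python
import numpy as np

def sml(img, step=1):
    """Per-pixel modified Laplacian with step s:
    |2*I(x,y) - I(x-s,y) - I(x+s,y)| + |2*I(x,y) - I(x,y-s) - I(x,y+s)|.
    The full SML additionally sums this over a small window."""
    p = np.pad(img, step, mode="reflect")
    c = p[step:-step, step:-step]
    return (np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
            + np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]))
```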


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Yong Yang ◽  
Wenjuan Zheng ◽  
Shuying Huang

The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in both objective and subjective evaluations.
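The similarity-based detection of focused regions can be sketched as below. This numpy version compares local squared differences against the initial fused image and omits the paper's morphological opening/closing clean-up; `win` is an assumed window size.

```python
import numpy as np

def focused_regions(src_a, src_b, init_fused, win=3):
    """Mark a pixel as focused in `src_a` where its windowed squared
    difference to the initial fused image is no larger than `src_b`'s."""
    k = win // 2
    def local_error(src):
        d = (src - init_fused) ** 2
        p = np.pad(d, k, mode="reflect")
        acc = np.zeros_like(d)
        for di in range(win):
            for dj in range(win):
                acc += p[di:di + d.shape[0], dj:dj + d.shape[1]]
        return acc
    return local_error(src_a) <= local_error(src_b)
```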

