Infrared and Visible Image Fusion Using SalientDecompose Based on Generative Adversarial Network

2021 · Author(s): Lei Chen, Jun Han
2020 · Vol 10 (2) · pp. 554 · Author(s): Dongdong Xu, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, ...

Infrared and visible image fusion produces combined images that simultaneously contain salient hidden targets and abundant visible details. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game guided by specially designed loss functions. The generator, built from residual blocks and skip connections, extracts deep features from the source image pairs and generates an elementary fused image containing infrared thermal radiation information and visible texture information; the discriminator then forces more details from the visible images into the final images. Activity-level measurements and fusion rules need not be designed manually, as they are implemented automatically. The method also avoids complicated multi-scale transforms, reducing computational cost and complexity. Experimental results demonstrate that the proposed method produces desirable images, achieving better performance in objective assessment and visual quality than nine representative infrared and visible image fusion methods.
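The abstract above does not give the exact loss functions, but losses of this family typically combine pixel-intensity fidelity to the infrared image with gradient (texture) fidelity to the visible image. A minimal NumPy sketch under that assumption; the finite-difference gradient operator, the weighting factor `xi`, and the function names are illustrative, not taken from the paper:

```python
import numpy as np

def gradient(img):
    # Simple finite-difference gradient magnitude (assumed operator).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def content_loss(fused, ir, vis, xi=5.0):
    # Intensity fidelity to the infrared image plus gradient (texture)
    # fidelity to the visible image; xi balances the two terms.
    intensity_term = np.mean((fused - ir) ** 2)
    texture_term = np.mean((gradient(fused) - gradient(vis)) ** 2)
    return intensity_term + xi * texture_term
```

In the adversarial setup such a content term would be added to the usual generator/discriminator losses; it is the part that injects thermal radiation and texture information directly.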


2020 · Vol 104 · pp. 103144 · Author(s): Jiangtao Xu, Xingping Shi, Shuzhen Qin, Kaige Lu, Han Wang, ...

Entropy · 2021 · Vol 23 (3) · pp. 376 · Author(s): Jilei Hou, Dazhi Zhang, Wei Wu, Jiayi Ma, Huabing Zhou

This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which considers not only the low-level features of infrared and visible images but also their high-level semantic information. Source images are divided into foregrounds and backgrounds by semantic masks. A generator with a dual-encoder-single-decoder framework extracts the features of foregrounds and backgrounds through separate encoder paths. Moreover, the discriminator's input image is constructed from the semantic segmentation by combining the foregrounds of the infrared images with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and the texture details in the visible images are preserved in the fused images simultaneously. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods.
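The discriminator input described above (infrared foreground plus visible background) reduces to a masked combination of the two source images. A minimal NumPy sketch, assuming a binary mask where 1 marks foreground; the mask convention and function name are assumptions for illustration:

```python
import numpy as np

def compose_discriminator_input(ir, vis, mask):
    # Combine the infrared foreground with the visible background.
    # mask == 1 marks foreground (thermal target), mask == 0 background.
    assert ir.shape == vis.shape == mask.shape
    return mask * ir + (1.0 - mask) * vis
```

The composed image then serves as the "real" reference the discriminator compares fused outputs against.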


2021 · Vol 11 (19) · pp. 9255 · Author(s): Syeda Minahil, Jun-Hyung Kim, Youngbae Hwang

In infrared (IR) and visible image fusion, significant information is extracted from each source image and integrated into a single image with comprehensive data. We observe that the salient regions of the infrared image contain the targets of interest, and we therefore enforce spatial adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based fusion method is proposed for infrared and visible image fusion. Building on an end-to-end network structure with dual discriminators, patch-wise discrimination is applied to reduce the blurry artifacts produced by previous image-level approaches. A new loss function is also proposed that uses constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets, and ablation studies are also conducted. Qualitative and quantitative analysis shows that we achieve competitive results compared with existing fusion methods.
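The abstract does not specify how the weight maps are constructed, but a common choice is to normalize infrared intensity into [0, 1] and use it to trade off fidelity to each source per pixel. A hedged NumPy sketch of such a weighted pixel loss; the normalization scheme and names are illustrative assumptions:

```python
import numpy as np

def weight_map(ir, eps=1e-8):
    # Hypothetical spatial weights: min-max normalize IR intensity to [0, 1],
    # so bright (salient) IR regions receive weights near 1.
    lo, hi = ir.min(), ir.max()
    return (ir - lo) / (hi - lo + eps)

def weighted_fusion_loss(fused, ir, vis):
    # High-weight pixels pull the fused image toward the infrared source,
    # low-weight pixels toward the visible source.
    w = weight_map(ir)
    return np.mean(w * (fused - ir) ** 2 + (1.0 - w) * (fused - vis) ** 2)
```

Patch-wise discrimination would then score local patches of the fused image rather than the whole image at once, which is what reduces image-level blurring.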


2021 · Vol 2021 · pp. 1-9 · Author(s): Dazhi Zhang, Jilei Hou, Wei Wu, Tao Lu, Huabing Zhou

Infrared and visible image fusion needs to preserve both the salient targets of the infrared image and the texture details of the visible image. Therefore, an infrared and visible image fusion method based on saliency detection is proposed. First, the saliency map of the infrared image is obtained by saliency detection. Then, a specific loss function and network architecture are designed around the saliency map to improve the performance of the fusion algorithm. Specifically, the saliency map is normalized to [0, 1] and used as a weight map to constrain the loss function; at the same time, it is binarized to separate salient regions from nonsalient regions. A generative adversarial network with dual discriminators is then constructed, where the two discriminators distinguish the salient regions and the nonsalient regions, respectively, pushing the generator toward better fusion results. Experimental results show that the fusion results of our method surpass those of existing methods in both subjective and objective aspects.
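The two saliency-map uses described above (normalization to a weight map, binarization into region masks for the dual discriminators) can be sketched in a few lines of NumPy. The 0.5 threshold and the function names are assumptions for illustration, not from the paper:

```python
import numpy as np

def normalize_saliency(s, eps=1e-8):
    # Scale the saliency map to [0, 1] for use as a loss weight map.
    return (s - s.min()) / (s.max() - s.min() + eps)

def binarize(s, threshold=0.5):
    # Binarize the normalized saliency map into salient / nonsalient masks.
    salient = (s >= threshold).astype(float)
    return salient, 1.0 - salient

def split_regions(img, s):
    # Give each discriminator its own region of the image:
    # one sees salient pixels, the other nonsalient pixels.
    sal, nonsal = binarize(normalize_saliency(s))
    return img * sal, img * nonsal
```

Each discriminator would then judge only its own region of the fused image against the corresponding region of the source, which is what lets the generator optimize target prominence and background texture separately.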

