Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement

PLoS ONE, 2021, Vol 16(2), pp. e0245563
Author(s): Hui Huang, Linlu Dong, Zhishuang Xue, Xiaofang Liu, Caijian Hua

Existing visible-and-infrared image fusion algorithms focus on highlighting infrared targets while neglecting image detail, and fail to account for the distinct characteristics of the two modalities. To address this, this paper proposes an image-enhancement fusion algorithm that combines the Karhunen-Loeve transform with Laplacian pyramid fusion. The detail layer of each source image is obtained by anisotropic diffusion, yielding richer texture information. Infrared images are enhanced with an adaptive histogram-partition and brightness-correction algorithm to highlight thermal radiation targets. For visible images, a novel power-function enhancement algorithm that simulates illumination is proposed to improve contrast and aid human observation. To improve fusion quality, the source images and the enhanced images are transformed by the Karhunen-Loeve transform to form new visible and infrared images. Laplacian pyramid fusion is performed on these new images, and the result is superimposed with the detail-layer images to obtain the final fused image. Experimental results on public data sets show that the method is superior to several representative image fusion algorithms in subjective visual quality, and in objective terms it performs well on all eight evaluation indicators.
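The detail-layer step described above can be sketched with Perona-Malik anisotropic diffusion: the diffused image serves as a base layer, and subtracting it from the source leaves a texture-rich detail layer. This is a minimal NumPy illustration; the iteration count, conduction constant `kappa`, step size `gamma`, and the periodic boundary handling via `np.roll` are illustrative choices, not values taken from the paper.

```python
import numpy as np

def anisotropic_diffusion(img, iterations=10, kappa=30.0, gamma=0.15):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving strong edges (periodic boundaries keep the sketch short)."""
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        # Finite differences toward the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients (exponential variant):
        # small where the gradient is large, so edges diffuse slowly
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def detail_layer(img, **kwargs):
    """Detail layer = source image minus its diffused base layer."""
    base = anisotropic_diffusion(img, **kwargs)
    return img.astype(np.float64) - base
```

Because the diffusion suppresses noise-scale variation but keeps strong edges, the residual detail layer carries exactly the fine texture the fusion step later adds back.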

2021, Vol 63(9), pp. 529-533
Author(s): Jiali Zhang, Yupeng Tian, LiPing Ren, Jiaheng Cheng, JinChen Shi

Reflection in images is common, and the removal of complex noise such as image reflection is still being explored. The problem is difficult and ill-posed: there is no known mixing function, and there are no constraints on the output space (the processed image). When detecting defects on metal surfaces using infrared thermography, reflection from smooth metal surfaces can easily corrupt the final detection results, so it is essential to remove reflection interference from infrared images. As neural networks have found wider application in image processing, researchers have tried to apply them to reflection removal. However, this work has focused mainly on visible images; to the authors' knowledge, neural networks have not previously been applied to remove reflection interference in infrared images. In this paper, the authors introduce the concept of a conditional generative adversarial network (cGAN) and propose an end-to-end network based on it, trained with two loss terms: a perceptual loss and an adversarial loss. A self-built dataset of infrared reflection images captured with an infrared camera is used. The experimental results demonstrate the effectiveness of this GAN for removing infrared image reflection.
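The two loss terms named above can be sketched as follows in NumPy. This is a minimal illustration of how the generator objective is typically assembled in such cGANs, not the paper's exact formulation: the feature maps are assumed to come from a fixed pretrained network, and the weighting `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Generator-side adversarial term: push the discriminator's scores
    # on de-reflected outputs toward "real" (1).
    eps = 1e-8  # numerical guard for log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def perceptual_loss(feat_fake, feat_real):
    # Mean squared distance between feature maps of the generated image
    # and the ground truth; in the full model these features would come
    # from a fixed pretrained network rather than raw pixels.
    return float(np.mean((feat_fake - feat_real) ** 2))

def generator_loss(d_fake, feat_fake, feat_real, lam=100.0):
    # Total objective: adversarial term plus weighted perceptual term.
    return adversarial_loss(d_fake) + lam * perceptual_loss(feat_fake, feat_real)
```

The perceptual term anchors the output to the reflection-free content, while the adversarial term supplies the missing output-space constraint the abstract mentions, by penalizing images the discriminator can tell apart from real reflection-free infrared images.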


Author(s): Han Xu, Pengwei Liang, Wei Yu, Junjun Jiang, Jiayi Ma

In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike the pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through the adversarial process between a generator and two discriminators, in addition to the specially designed content loss. The generator is trained to generate real-like fused images to fool discriminators. The two discriminators are trained to calculate the JS divergence between the probability distribution of downsampled fused images and infrared images, and the JS divergence between the probability distribution of gradients of fused images and gradients of visible images, respectively. Thus, the fused images can compensate for the features that are not constrained by the single content loss. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved or even enhanced in the fused image simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN can be preferably applied to the fusion of different resolution images. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art.
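The two discriminator inputs described above can be sketched as follows: a gradient map of the fused image for the visible-image discriminator, and a downsampled view of the fused image for the infrared discriminator. This is a NumPy sketch under assumptions; the Laplacian operator and the average-pooling factor are illustrative stand-ins, not necessarily the operators used in DDcGAN.

```python
import numpy as np

def gradient_map(img):
    # Laplacian response: the "texture view" of the fused image that is
    # compared against the gradients of the visible image.
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def downsample(img, s=4):
    # Average-pool by factor s: the low-resolution view of the fused
    # image that is compared against the raw, lower-resolution
    # infrared input.
    h, w = img.shape
    img = img[:h - h % s, :w - w % s].astype(float)
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
```

Because each discriminator only ever sees its own transformed view, the generator is pulled toward both distributions at once: thermal intensity at low resolution and visible texture at full resolution.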


2007
Author(s): Tian Si, Youtang Gao, Jianliang Qiao, Benkang Chang

2011
Author(s): Xin-sai Wang, Qiang Wu, Wei-ping Wang, Ming He, Yu Liu

2016
Author(s): Rongkun Xue, Wei He, Jiahui Liu, Yufeng Li

2021
Author(s): Zhang Xin, Yang Yu, Yike Shi, Jiahao Zhang, Zefeng Zhang, ...

2010
Author(s): Jie Zhang, Ziji Liu, Yanzhao Lei, Yadong Jiang
