Deep Color Transfer for Color-Plus-Mono Dual Cameras

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2743
Author(s):  
Hae Woong Jang ◽  
Yong Ju Jung

A few approaches have studied image fusion using color-plus-mono dual cameras to improve image quality in low-light shooting. Among them, the color transfer approach, which transfers the color information of a color image to a mono image, is considered promising for obtaining improved images with less noise and more detail. However, color transfer algorithms rely heavily on appropriate color hints from the given color image. Unreliable color hints caused by errors in stereo matching of a color-plus-mono image pair can generate various visual artifacts in the final fused image. This study proposes a novel color transfer method that seeks reliable color hints in the color image and colorizes the corresponding mono image from those hints using a deep learning model. Specifically, a color-hint-based mask generation algorithm is developed to obtain reliable color hints: it removes unreliable color pixels using a reliability map computed by the binocular just-noticeable-difference model. In addition, a deep colorization network that utilizes structural information is proposed to solve the color bleeding artifact problem. The experimental results demonstrate that the proposed method provides better results than existing image fusion algorithms for dual cameras.
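The hint-masking step described above can be sketched in a few lines. This is a simplified illustration, not the paper's method: the binocular just-noticeable-difference model is replaced by a hypothetical scalar threshold `jnd`, and `warped_y` stands for the luminance of the color image after stereo warping.

```python
import numpy as np

def reliable_color_hints(mono, warped_y, hints, jnd=0.05):
    """Keep a color hint only where the warped color image's luminance
    agrees with the mono image within a perceptual threshold.
    `jnd` is a scalar stand-in for the binocular JND model (assumption)."""
    reliability = np.abs(mono - warped_y) < jnd        # boolean reliability map
    masked = np.where(reliability[..., None], hints, 0.0)  # zero out unreliable hints
    return masked, reliability
```

Pixels where stereo matching disagrees with the mono luminance are dropped, so the downstream colorization network only sees hints that survive the reliability test.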

2011 ◽  
Vol 255-260 ◽  
pp. 2072-2076
Author(s):  
Yi Yong Han ◽  
Jun Ju Zhang ◽  
Ben Kang Chang ◽  
Yi Hui Yuan ◽  
Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach that uses the structural similarity index to assess the quality of image fusion. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and are able to assess different image fusion methods.
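The no-reference idea above can be sketched as follows: instead of comparing the fused image against an unavailable ground truth, compare it against each source and average. This is a simplified global-statistics version of SSIM, not the authors' exact measure (which would typically operate on local windows).

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global structural similarity between two images (single window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fusion_quality(src_a, src_b, fused):
    """No-reference fusion quality: mean similarity of the fused image
    to each source image (a sketch of the general idea)."""
    return 0.5 * (ssim_global(src_a, fused) + ssim_global(src_b, fused))
```

A fused image identical to one source scores 1 against that source, so the average rewards fused results that preserve structure from both inputs.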


2012 ◽  
Vol 433-440 ◽  
pp. 5436-5442
Author(s):  
Lei Li

Pseudo-color processing is very meaningful for target identification and tracking, and experimental results show that pseudo-color image fusion is a very effective method. This paper presents a new false-color image fusion method. The grayscale source images are fused using the wavelet transform; the fused gray image and its differences from the original images are then used, respectively, as the l, α, and β components of the color fusion image, and a color-space transformation yields the final false-color fused image. The results show that the colors of the fused image are more vivid and more in line with human visual characteristics.
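The channel-assembly step described above can be sketched as below. The wavelet fusion itself is omitted (any grayscale fusion result can be supplied); the exact channel ordering and scaling in the lαβ space are assumptions for illustration.

```python
import numpy as np

def false_color_lab(fused_gray, src_a, src_b):
    """Assemble an l-alpha-beta false-color image: the fused grayscale
    drives the achromatic l channel, and the differences between the
    fused image and each source drive the two chroma channels
    (ordering/scaling here is an assumption)."""
    l = fused_gray
    alpha = fused_gray - src_a   # residual w.r.t. source A
    beta = fused_gray - src_b    # residual w.r.t. source B
    return np.stack([l, alpha, beta], axis=-1)
```

The residual chroma channels encode where each source contributed detail, which is what makes the false-color result visually discriminative.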


2014 ◽  
Vol 511-512 ◽  
pp. 462-466
Author(s):  
Shi Hong Xu ◽  
Guo Qing Huang ◽  
Cun Chao Liu ◽  
Chun Ping Xiong

A natural color fusion method for infrared and low-light-level images is proposed. The method combines image fusion and color transfer. Sparse representation is used to merge information from the source images into a fused image, which is assigned to the Y channel. The I and Q channels are then formed using Toet's method, which extracts the common component from the source images. Finally, the false-color image is obtained by applying color transfer to the prior pseudo-color YIQ image. Experiments show that, compared with the traditional coloration algorithm, the results of our method contain more salient information, have higher color contrast, and exhibit a more natural color appearance.
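The final color-transfer step can be illustrated with the widely used Reinhard-style statistics matching (match each channel's mean and standard deviation to a natural reference image); whether the paper uses exactly this variant is an assumption.

```python
import numpy as np

def reinhard_transfer(src, ref):
    """Global per-channel color transfer: shift and scale each channel
    of `src` so its mean/std match those of the reference image `ref`."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) * (r.std() / (s.std() + 1e-8)) + r.mean()
    return out
```

Applied to the pseudo-color YIQ image (converted to the transfer color space), this imposes the reference scene's color statistics, which is what produces the "natural" appearance claimed above.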


Author(s):  
Xiuming Sun ◽  
Weina Wu ◽  
Peng Geng ◽  
Lin Lu ◽  
...  

In order to achieve multi-focus image fusion, a quaternion-based sparse representation method is proposed in this paper. Firstly, drawing on computational mathematics, the RGB color information of each pixel is represented by a quaternion, so that each color pixel is processed as a whole vector and the correlations among the three color channels are preserved. Secondly, a quaternion dictionary and quaternion sparse coefficients are obtained using our proposed sparse representation model. Thirdly, the coefficients are fused using the "max-L1" rule. Finally, the fused sparse coefficients and the dictionary are used for image reconstruction to obtain the quaternion fused image, which is then converted into an RGB multi-focus fused image. The experimental results show that the method achieves good results in both visual quality and objective evaluation.
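The "max-L1" coefficient fusion rule named above is straightforward to sketch for real-valued coefficients (the quaternion arithmetic is omitted here; columns stand for per-patch sparse codes, which is an assumed layout):

```python
import numpy as np

def max_l1_fuse(coef_a, coef_b):
    """max-L1 rule: for each patch (one column of sparse coefficients),
    keep the coefficient vector with the larger L1 norm, i.e. the
    source that is more strongly represented over the dictionary."""
    pick_a = np.abs(coef_a).sum(axis=0) >= np.abs(coef_b).sum(axis=0)
    return np.where(pick_a, coef_a, coef_b)   # broadcasts per column
```

Because a larger L1 norm of the sparse code correlates with more salient (in-focus) patch content, this rule selects the sharper source patch-by-patch.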


Author(s):  
Shaheera Rashwan ◽  
Walaa Sheta

The main objective of hyper-/multispectral image fusion is to produce a composite color image that allows for an appropriate visualization of the relevant spatial and spectral information. In this paper, we propose a general framework for spectral weighting-based image fusion. The proposed methodology relies on weight updates conducted using nature-inspired algorithms and a goodness-of-fit criterion defined as the average root mean square error. Simulations on four public data sets and a recent Landsat 8 image of Brullus Lake, Egypt, as an area of study prove the efficiency of the proposed framework. The purpose of the study is to present a multi-band image fusion framework that produces a fused image of high quality for further computer processing, and the results show that the image produced by the presented framework has the highest quality compared with some state-of-the-art algorithms. To demonstrate the increase in image quality, we used general quality metrics such as the Universal Image Quality Index, Mutual Information, the Variance, and the Information Measure.
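The core of the framework above is a weighted band combination scored by average RMSE; a nature-inspired optimizer would search over the weight vector. A minimal sketch of the combination and the fitness criterion (the optimizer itself is omitted):

```python
import numpy as np

def weighted_fusion(bands, weights):
    """Spectral-weighting fusion: normalized weighted sum of the input
    bands (bands has shape (n_bands, H, W))."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, bands, axes=1)

def avg_rmse(fused, bands):
    """Goodness-of-fit criterion from the abstract: average root mean
    square error between the composite and each source band."""
    return float(np.mean([np.sqrt(np.mean((fused - b) ** 2)) for b in bands]))
```

A particle swarm or genetic algorithm would repeatedly call `avg_rmse(weighted_fusion(bands, w), bands)` to evaluate candidate weight vectors.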


Optik ◽  
2014 ◽  
Vol 125 (20) ◽  
pp. 6010-6016 ◽  
Author(s):  
Xuelian Yu ◽  
Jianle Ren ◽  
Qian Chen ◽  
Xiubao Sui

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differing cell sizes and shapes, the presence of incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of varying quality in a reasonable time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which allows reducing the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times and comparing them with the results achieved by two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
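For context on what the learned fusion replaces, a classical multi-focus baseline selects, at each pixel, the source with the larger local variance (a simple focus measure). This is emphatically not the paper's U-Net method, only a conventional stand-in:

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k-by-k window (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def focus_fuse(a, b, k=3):
    """Classical multi-focus baseline: per pixel, keep the source with
    the larger local variance (a simple sharpness/focus measure)."""
    va = box_mean(a * a, k) - box_mean(a, k) ** 2
    vb = box_mean(b * b, k) - box_mean(b, k) ** 2
    return np.where(va >= vb, a, b)
```

A learned fusion network replaces the hand-crafted variance measure with a decision learned from data, which is what handles overlapping cells and weak boundaries better.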


2021 ◽  
pp. 1-13
Author(s):  
N. Aishwarya ◽  
C. BennilaThangammal ◽  
N.G. Praveena

Getting a complete description of a scene with all the relevant objects in focus is a hot research area in surveillance, medicine, and machine vision applications. In this work, a transform-based fusion method, called NSCT-FMO, is introduced to integrate image pairs having different focus features. The NSCT-FMO approach contains four basic steps. Initially, the NSCT is applied to the input images to acquire the approximation and detailed structural information. Then, the approximation sub-band coefficients are merged by employing the novel Focus Measure Optimization (FMO) approach. Next, the detailed sub-images are combined using Phase Congruency (PC). Finally, an inverse NSCT operation is conducted on the synthesized sub-images to obtain the initial synthesized image. To optimize the initial fused image, an initial decision map is first constructed, and a morphological post-processing technique is applied to obtain the final map. With the help of the resultant map, the final synthesized output is produced by selecting focused pixels from the input images. Simulation analyses show that the NSCT-FMO approach achieves fair results compared with traditional MST-based methods in both qualitative and quantitative assessments.
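The decision-map refinement and final pixel selection described above can be sketched as follows. The majority-vote filter is a simple stand-in for the paper's morphological post-processing (an assumption); it removes isolated misclassified pixels from the binary map before selecting focused pixels.

```python
import numpy as np

def refine_decision_map(dmap, k=3):
    """Majority vote in a k-by-k window: a pixel keeps the label held by
    most of its neighbors, suppressing isolated decision errors."""
    pad = k // 2
    p = np.pad(dmap.astype(float), pad, mode="edge")
    votes = np.zeros(dmap.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            votes += p[dy:dy + dmap.shape[0], dx:dx + dmap.shape[1]]
    return votes > (k * k) / 2.0

def select_pixels(dmap, src_a, src_b):
    """Final synthesis step: take the pixel from source A where the
    refined map says A is in focus, otherwise from source B."""
    return np.where(dmap, src_a, src_b)
```

After refinement, `select_pixels` copies each pixel directly from whichever source the map marks as focused, so no blending artifacts are introduced.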

