Color image generation for screen-scanning holographic display

2015 ◽  
Vol 23 (21) ◽  
pp. 26986 ◽  
Author(s):  
Yasuhiro Takaki ◽  
Yuji Matsumoto ◽  
Tatsumi Nakajima


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3387 ◽  
Author(s):  
Hyun-Koo Kim ◽  
Kook-Yeol Yoo ◽  
Ho-Youl Jung

In this paper, a modified encoder-decoder structured fully convolutional network (ED-FCN) is proposed to generate a camera-like color image from the light detection and ranging (LiDAR) reflection image. Previously, we showed the possibility of generating a color image from a heterogeneous source using the asymmetric ED-FCN. In addition, modified ED-FCNs, i.e., the UNET and the selected-connection UNET (SC-UNET), have been successfully applied to biomedical image segmentation and to concealed-object detection for military purposes, respectively. In this paper, we apply the SC-UNET to generate a color image from a heterogeneous image, and we analyze various connections between the encoder and decoder. The LiDAR reflection image contains only 5.28% valid values, i.e., its data are extremely sparse. This severe sparseness limits the generation performance when the UNET is applied directly to this heterogeneous image-generation task. We therefore present a methodology for choosing the network connections in the SC-UNET that considers the sparseness at each level of the encoder network and the similarity between corresponding levels of the encoder and decoder networks. The simulation results show that the proposed SC-UNET, with connections between the encoder and decoder at the two lowest levels, yields improvements of 3.87 dB in peak signal-to-noise ratio and 0.17 in structural similarity over the conventional asymmetric ED-FCN. The methodology presented in this paper would be a powerful tool for generating data from heterogeneous sources.
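For readers who want the shape of this idea in code, the following is a minimal PyTorch sketch of a selected-connection U-Net with skip connections only at the two deepest levels, the configuration the abstract reports as best. The class name, four-level depth, and all channel widths are illustrative assumptions, not the authors' implementation.

```python
# Illustrative SC-UNET sketch: a U-Net whose encoder-decoder skip
# connections are enabled only at selected levels. Hyperparameters
# are assumptions for the sketch, not the paper's exact network.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SCUNet(nn.Module):
    def __init__(self, skip_levels=(1, 2)):  # the two deepest skip levels
        super().__init__()
        chans = [16, 32, 64, 128]
        self.skip_levels = set(skip_levels)
        self.encs = nn.ModuleList()
        c_prev = 1  # sparse single-channel LiDAR reflection image
        for c in chans:
            self.encs.append(conv_block(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        for i in reversed(range(len(chans) - 1)):
            self.ups.append(nn.ConvTranspose2d(chans[i + 1], chans[i], 2, stride=2))
            # decoder input doubles where a skip connection feeds this level
            c_in = chans[i] * 2 if i in self.skip_levels else chans[i]
            self.decs.append(conv_block(c_in, chans[i]))
        self.head = nn.Conv2d(chans[0], 3, 1)  # RGB output

    def forward(self, x):
        feats = []
        for i, enc in enumerate(self.encs):
            x = enc(x)
            feats.append(x)
            if i < len(self.encs) - 1:
                x = self.pool(x)
        for j, (up, dec) in enumerate(zip(self.ups, self.decs)):
            level = len(self.encs) - 2 - j
            x = up(x)
            if level in self.skip_levels:
                x = torch.cat([x, feats[level]], dim=1)
            x = dec(x)
        return self.head(x)
```

Dropping the shallow skips reflects the sparseness argument above: early encoder features of a 5.28%-valid input carry mostly empty pixels, so copying them to the decoder helps little.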


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5414 ◽
Author(s):  
Hyun-Koo Kim ◽  
Kook-Yeol Yoo ◽  
Ho-Youl Jung

Recently, it has been reported that a camera-captured-like color image can be generated from the reflection data of 3D light detection and ranging (LiDAR). In this paper, we show that a color image can also be generated from the range data of LiDAR. We propose deep learning networks that generate color images by fusing reflection and range data from LiDAR point clouds. In the proposed networks, the two datasets are fused in three ways: early, mid, and last fusion. The baseline network is the encoder-decoder structured fully convolutional network (ED-FCN). Image-generation performance was evaluated according to source type: reflection data only, range data only, and the fusion of the two datasets. The well-known KITTI evaluation data were used for training and verification. The simulation results show that the proposed last-fusion method yields improvements of 0.53 dB, 0.49 dB, and 0.02 in gray-scale peak signal-to-noise ratio (PSNR), color-scale PSNR, and structural similarity index measure (SSIM), respectively, over the conventional reflection-based ED-FCN. Moreover, the last-fusion method is suitable for real-time applications, with an average processing time of 13.56 ms per frame. The methodology presented in this paper would be a powerful tool for generating data from two or more heterogeneous sources.
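The following is a hedged sketch of the three fusion points compared above, assuming the reflection and range inputs are single-channel projected images; the toy backbone and every channel count are assumptions for the sketch, not the paper's ED-FCN.

```python
# Illustrative early/mid/last fusion of LiDAR reflection and range images.
import torch
import torch.nn as nn

class EDFCN(nn.Module):
    """Toy encoder-decoder backbone (a stand-in, not the paper's network)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(c_in, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, c_out, 2, stride=2))

    def forward(self, x):
        return self.decoder(self.encoder(x))

class EarlyFusion(nn.Module):
    """Stack the two modalities at the input and run one network."""
    def __init__(self):
        super().__init__()
        self.net = EDFCN(c_in=2, c_out=3)

    def forward(self, refl, rng):
        return self.net(torch.cat([refl, rng], dim=1))

class MidFusion(nn.Module):
    """Separate encoders; features are merged before a shared decoder."""
    def __init__(self):
        super().__init__()
        self.enc_r = EDFCN(1, 3).encoder
        self.enc_d = EDFCN(1, 3).encoder
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 2, stride=2))

    def forward(self, refl, rng):
        return self.dec(torch.cat([self.enc_r(refl), self.enc_d(rng)], dim=1))

class LastFusion(nn.Module):
    """Two complete networks; their outputs are merged by a 1x1 conv."""
    def __init__(self):
        super().__init__()
        self.net_r = EDFCN(1, 3)
        self.net_d = EDFCN(1, 3)
        self.merge = nn.Conv2d(6, 3, 1)

    def forward(self, refl, rng):
        return self.merge(torch.cat([self.net_r(refl), self.net_d(rng)], dim=1))
```

The trade-off the abstract measures is visible in the structure: last fusion roughly doubles the network cost, but it lets each modality keep its own feature hierarchy until the very end.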


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4818 ◽  
Author(s):  
Hyun-Koo Kim ◽  
Kook-Yeol Yoo ◽  
Ju H. Park ◽  
Ho-Youl Jung

In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method consists of two steps: projection of the LiDAR 3D reflection intensity into a 2D intensity image, and color image generation from the projected intensity using a fully convolutional network (FCN). Because the color image must be generated from a very sparse projected intensity image, the FCN is designed with an asymmetric network structure, i.e., the layer depth of the decoder is greater than that of the encoder. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation of the proposed FCN. The performance of the asymmetric network structure is empirically analyzed for various combinations of encoder and decoder depths. Simulations show that the proposed method generates images of fairly good visual quality while maintaining almost the same colors as the ground-truth image. Moreover, the proposed FCN substantially outperforms conventional interpolation methods and the generative adversarial network-based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free, daylight-like color images. This is because LiDAR sensor data are produced by active light reflection and are therefore unaffected by sunlight and shadow.
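The asymmetry described above can be sketched as follows; the specific depths and channel widths are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative asymmetric ED-FCN: the decoder has more layers than the
# encoder, giving the network room to fill in the sparse input pixels.
import torch.nn as nn

class AsymmetricEDFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow encoder: two strided stages over the sparse intensity image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Deeper decoder: extra non-strided convolutions at each scale.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1))  # RGB output

    def forward(self, x):
        return self.decoder(self.encoder(x))
```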


Author(s):  
Zhiwu Huang ◽  
Jiqing Wu ◽  
Luc Van Gool

Generative modeling of natural images is one of the most fundamental machine learning problems. However, few modern generative models, including Wasserstein Generative Adversarial Nets (WGANs), have been studied on manifold-valued images, which are frequently encountered in real-world applications. To fill this gap, this paper first formulates the problem of generating manifold-valued images and explores three typical instances: hue-saturation-value (HSV) color image generation, chromaticity-brightness (CB) color image generation, and diffusion-tensor (DT) image generation. For the proposed generative modeling problem, we then apply a theorem of optimal transport to derive a new Wasserstein distance between data distributions on complete manifolds, enabling a tractable objective under the WGAN framework. In addition, we recommend three benchmark datasets: CIFAR-10 HSV/CB color images, ImageNet HSV/CB color images, and the UCL DT image dataset. On these three datasets, we experimentally demonstrate that the proposed manifold-aware WGAN model generates more plausible manifold-valued images than its competitors.
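For context, the standard Euclidean-case objective that WGANs optimize is the Wasserstein-1 distance in its Kantorovich-Rubinstein dual form,

$$ W_1(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)] - \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}[f(\tilde{x})], $$

where the supremum runs over 1-Lipschitz critics $f$. The paper's contribution is a counterpart of this distance for distributions on complete manifolds; that manifold-aware form is not reproduced here.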


1990 ◽  
Vol 14 (4) ◽  
pp. 351-360 ◽  
Author(s):  
Yih-ping Huang

Author(s):  
Muhammad Haris ◽  
Kazuhito Sawase ◽  
Muhammad Rahmat Widyanto ◽  
Hajime Nobuhara ◽  
...  

An efficient super-resolution algorithm is proposed based on the dimensional reduction of color images and three types of edge direction. The basic idea is to reduce the dimensions of an image, especially a color image, and to interpolate pixels using three simple edge directions: vertical, horizontal, and diagonal. The proposed algorithm can eliminate more color artifacts than bicubic interpolation. Experiments on 30 natural images confirm that the PSNR of the proposed method matches the quality of Fast Curvature-Based Interpolation (FCBI). We also confirmed that the computation time of the proposed method is 40% shorter for RGB images and 20% shorter for grayscale images than that of FCBI, the previously fastest method. As one application of our proposal, we show efficient panorama image generation based on the proposed super-resolution method.
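Below is a rough numpy sketch of edge-direction-aware interpolation in the spirit described above, shown for a grayscale channel; the direction tests and boundary handling are simplifying assumptions, not the authors' algorithm.

```python
# Illustrative edge-directed ~2x upsampling: each new pixel is averaged
# along the direction (vertical, horizontal, or diagonal) with the
# smaller intensity difference, i.e., along the edge rather than across it.
import numpy as np

def upsample_edge_directed(img):
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))  # ~2x grid
    out[::2, ::2] = img  # original samples stay on the even grid

    # Pass 1: diagonal new pixels, interpolated along the smoother diagonal.
    for y in range(1, 2 * h - 2, 2):
        for x in range(1, 2 * w - 2, 2):
            nw, ne = out[y - 1, x - 1], out[y - 1, x + 1]
            sw, se = out[y + 1, x - 1], out[y + 1, x + 1]
            if abs(nw - se) < abs(ne - sw):   # edge runs NW-SE
                out[y, x] = (nw + se) / 2
            else:                             # edge runs NE-SW
                out[y, x] = (ne + sw) / 2

    # Pass 2: remaining pixels, using the smoother of vertical/horizontal.
    for y in range(2 * h - 1):
        for x in range(2 * w - 1):
            if (y % 2) == (x % 2):
                continue  # original sample or already filled in pass 1
            vert = [out[y - 1, x], out[y + 1, x]] if 0 < y < 2 * h - 2 else None
            horz = [out[y, x - 1], out[y, x + 1]] if 0 < x < 2 * w - 2 else None
            if vert and horz:
                out[y, x] = (np.mean(vert)
                             if abs(vert[0] - vert[1]) < abs(horz[0] - horz[1])
                             else np.mean(horz))
            else:
                out[y, x] = np.mean(vert if vert else horz)

    return out
```

For color input, the abstract's dimensional-reduction step would run before this kernel, so the directional interpolation itself operates on a single channel.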

