Improved Depth Estimation for Occlusion Scenes Using a Light-Field Camera

2020 ◽  
Vol 86 (7) ◽  
pp. 443-456
Author(s):  
Changkun Yang ◽  
Zhaoqin Liu ◽  
Kaichang Di ◽  
Changqing Hu ◽  
Yexin Wang ◽  
...  

With the development of light-field imaging technology, depth estimation using light-field cameras has become a hot topic in recent years. Even though many algorithms have achieved good depth-estimation performance with light-field cameras, removing the influence of occlusion, especially multi-occlusion, is still a challenging task. The photo-consistency assumption does not hold in the presence of occlusions, which makes most light-field depth estimates unreliable. In this article, a novel method to handle complex occlusion in light-field depth estimation is proposed. The method effectively identifies occluded pixels using a refocusing algorithm, accurately selects unoccluded views using an adaptive unoccluded-view identification algorithm, and then improves the depth estimate by computing the cost volumes over the unoccluded views only. Experimental results demonstrate the advantages of the proposed algorithm over conventional state-of-the-art algorithms on both synthetic and real light-field data sets.
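The core idea above — excluding occluded views before computing the matching cost — can be sketched with a toy photo-consistency test. This is a minimal illustration, not the paper's actual algorithm; the function name, the variance-based cost, and the threshold `tau` are all assumptions made for the example.

```python
import numpy as np

def occlusion_aware_cost(angular_patch, center_value, tau=0.1):
    """Matching cost over views that remain photo-consistent with the
    central view (hypothetical helper, not the authors' implementation).

    angular_patch : (n_views,) intensities of one scene point sampled
                    across sub-aperture views after refocusing to a
                    candidate depth.
    center_value  : intensity of the same pixel in the central view.
    tau           : photo-consistency threshold (assumed value).
    """
    # Views whose refocused intensity deviates strongly from the central
    # view are treated as occluded and excluded from the cost.
    unoccluded = np.abs(angular_patch - center_value) < tau
    if not np.any(unoccluded):
        return np.inf  # every view occluded at this depth hypothesis
    # Variance over the unoccluded views: low variance means the
    # candidate depth refocuses these views onto the same scene point.
    return float(np.var(angular_patch[unoccluded]))

# Toy example: 9 views, two of them blocked by a bright foreground object.
patch = np.array([0.50, 0.51, 0.49, 0.50, 0.90, 0.88, 0.50, 0.52, 0.50])
cost = occlusion_aware_cost(patch, center_value=0.50)
```

With the two occluded samples (0.90, 0.88) excluded, the cost stays near zero, so the candidate depth is accepted; including them would inflate the variance and reject a correct depth.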

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6061
Author(s):  
Lei Han ◽  
Xiaohua Huang ◽  
Zhan Shi ◽  
Shengnan Zheng

Depth estimation based on light-field imaging is a newer methodology that follows traditional binocular stereo matching and depth estimation from monocular images. Significant progress has been made in light-field depth estimation. Nevertheless, the balance between computational time and depth-estimation accuracy is still worth exploring. The geometry of light-field imaging is the basis of depth estimation, and the abundance of light-field data makes deep-learning algorithms convenient to apply. The Epipolar Plane Image (EPI) generated from the light-field data has a line texture containing geometric information: the slope of each line is proportional to the depth of the corresponding object. Treating light-field depth estimation as a spatial dense-prediction task, we design a convolutional neural network (ESTNet) to estimate accurate depth quickly. Inspired by the strong image-feature extraction ability of convolutional neural networks, especially for texture images, we propose to generate EPI synthetic images from light-field data as the input of ESTNet to improve feature extraction and depth estimation. The architecture of ESTNet is characterized by three input streams, an encoding-decoding structure, and skip-connections. The three input streams receive the horizontal EPI synthetic image (EPIh), the vertical EPI synthetic image (EPIv), and the central view image (CV), respectively. EPIh and EPIv contain rich texture and depth cues, while CV provides pixel-position association information. ESTNet consists of two stages: encoding and decoding. The encoding stage includes several convolution modules, and correspondingly, the decoding stage comprises transposed-convolution modules. In addition to the forward propagation of ESTNet, skip-connections are added between each convolution module and the corresponding transposed-convolution module to fuse shallow local features with deep semantic features.
ESTNet is trained on one part of a synthetic light-field dataset and then tested on another part of the synthetic light-field dataset and real light-field dataset. Ablation experiments show that our ESTNet structure is reasonable. Experiments on the synthetic light-field dataset and real light-field dataset show that our ESTNet can balance the accuracy of depth estimation and computational time.
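The geometric cue ESTNet learns from — that a scene point traces a line across sub-aperture views in the EPI, with slope encoding depth — can be demonstrated on a synthetic strip. This is a simplified toy model of the EPI geometry only, not the ESTNet pipeline; the function name and the least-squares fit are illustrative assumptions.

```python
import numpy as np

def epi_slope(epi):
    """Least-squares slope of the dominant line in a sparse EPI strip
    (toy illustration; real EPIs are dense texture images).

    epi : (n_views, width) array; nonzero entries mark the trace a
          scene point leaves across the sub-aperture views. The slope
          dx/du is the disparity, from which depth follows via the
          camera geometry.
    """
    u, x = np.nonzero(epi)           # angular coordinate u, spatial x
    slope, _intercept = np.polyfit(u, x, 1)  # fit x = slope*u + b
    return slope

# Synthetic EPI: one point shifting 2 pixels per view step
# (i.e., a disparity of 2).
epi = np.zeros((5, 16))
for u in range(5):
    epi[u, 3 + 2 * u] = 1.0

d = epi_slope(epi)
```

For this strip the recovered slope is 2 pixels per view, matching the disparity used to build it; a network like ESTNet learns to read the same slope cue directly from the EPI texture.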


2019 ◽  
Vol 2019 (3) ◽  
pp. 636-1-636-6
Author(s):  
H. Harlyn Baker ◽  
Gregorij Kurillo ◽  
Allan Miller ◽  
Alessandro Temil ◽  
Tom Defanti ◽  
...  

Author(s):  
Shuyao Zhou ◽  
Tianqian Zhu ◽  
Kanle Shi ◽  
Yazi Li ◽  
Wen Zheng ◽  
...  

Abstract: Light fields are vector functions that map the geometry of light rays to the corresponding plenoptic attributes. They describe the holographic information of scenes by representing the amount of light flowing in every direction through every point in space. The physical concept of the light field was first proposed in 1936, and light fields are becoming increasingly important in computer graphics, especially with the fast growth of computing capacity and network bandwidth. In this article, light-field imaging is reviewed from the following aspects, with an emphasis on the achievements of the past five years: (1) depth estimation, (2) content editing, (3) image quality, (4) scene reconstruction and view synthesis, and (5) industrial products, since light-field technologies also intersect with industrial applications. State-of-the-art research has focused on light-field acquisition, manipulation, and display, and has extended from the laboratory to industry. Given these achievements and challenges, in the near future the applications of light fields could offer greater portability, accessibility, compatibility, and ability to visualize the world.


Author(s):  
Tadd T. Truscott ◽  
Jesse Belden ◽  
Joseph R. Nielson ◽  
David J. Daily ◽  
Scott L. Thomson

2018 ◽  
Vol 57 (11) ◽  
pp. 2841 ◽  
Author(s):  
Takeshi Shimano ◽  
Yusuke Nakamura ◽  
Kazuyuki Tajima ◽  
Mayu Sao ◽  
Taku Hoshizawa
