Efficient hole filling and depth enhancement based on texture image and depth map consistency

Author(s):  
Ting-An Chang ◽  
Jung-Ping Kuo ◽  
Jar-Ferr Yang
2014 ◽  
Vol 60 (2) ◽  
pp. 394-404 ◽  
Author(s):  
Chao Yao ◽  
Tammam Tillo ◽  
Yao Zhao ◽  
Jimin Xiao ◽  
Huihui Bai ◽  
...  

2021 ◽  
Author(s):  
Kuan-Ting Lee ◽  
En-Rwei Liu ◽  
Jar-Ferr Yang ◽  
Li Hong

Abstract With the rapid development of 3D coding and display technologies, numerous applications are emerging that target immersive human entertainment. To achieve a prime 3D visual experience, high-accuracy depth maps play a crucial role. However, depth maps retrieved from most devices still suffer inaccuracies at object boundaries. Therefore, a depth enhancement system is usually needed to correct these errors. Recent work applying deep learning to depth enhancement has shown promising improvements. In this paper, we propose a deep depth enhancement network that effectively corrects inaccurate depth using color images as a guide. The proposed network contains both a depth branch and an image branch, and we combine a new set of features from the image branch with those from the depth branch. Experimental results show that the proposed system achieves better depth correction performance than state-of-the-art networks. An ablation study reveals that the proposed loss functions, which use image information, enhance depth map accuracy effectively.
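The fusion idea described in the abstract — extracting features from a color-image branch and combining them with depth-branch features to predict a corrected depth — can be illustrated with a deliberately tiny NumPy sketch. This is a toy stand-in, not the paper's network: the branch "features" (raw depth and simple gradient magnitudes), the fusion weights, and the function names are all illustrative assumptions.

```python
import numpy as np

def grad_mag(x):
    """Gradient-magnitude feature map (toy stand-in for learned branch features)."""
    gy = np.abs(np.diff(x, axis=0, prepend=x[:1]))       # vertical differences
    gx = np.abs(np.diff(x, axis=1, prepend=x[:, :1]))    # horizontal differences
    return gy + gx

def fuse_and_correct(depth, color, w=(0.9, 0.1, -0.05)):
    """Combine a depth-branch feature with image-branch features and
    mix them with a 1x1 'convolution' (a weighted sum per pixel).
    The weights `w` are arbitrary placeholders, not trained values."""
    feats = np.stack([depth,            # depth-branch feature
                      grad_mag(color),  # image-branch feature (color edges)
                      grad_mag(depth)]) # depth-branch edge feature
    # tensordot contracts the weight vector against the feature axis,
    # yielding one corrected-depth value per pixel
    return np.tensordot(np.asarray(w), feats, axes=1)

depth = np.ones((4, 4))
color = np.zeros((4, 4))
out = fuse_and_correct(depth, color)
```

In the actual network these hand-picked features and weights would be replaced by learned convolutional layers, but the per-pixel fusion of image- and depth-derived feature maps is the same structural idea.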


2017 ◽  
Vol 14 (2) ◽  
pp. 172988141769556 ◽  
Author(s):  
Hengyu Li ◽  
Hang Liu ◽  
Ning Cao ◽  
Yan Peng ◽  
Shaorong Xie ◽  
...  

This article concerns the problems of defective depth maps and the limited field of view of Kinect-style RGB-D sensors. An anisotropic-diffusion-based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data obtained from registering the color images can be used to stitch depth and color images into a combined depth and color panorama concurrently in real time. Experiments show that the proposed stitching method can generate an RGB-D panorama with no invalid depth data and little distortion in real time, and can be extended to incorporate more RGB-D sensors to construct even a 360° panoramic RGB-D image.
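The hole-filling step can be sketched as an iterative diffusion: invalid (zero) depth pixels absorb weighted averages of their valid neighbors, with edge-stopping weights computed from the aligned color image so depth does not bleed across object boundaries. This is a minimal NumPy sketch under assumed conventions (zero marks invalid depth, single-channel color, wrap-around borders via `np.roll`), not the paper's exact scheme.

```python
import numpy as np

def fill_depth_holes(depth, color, iters=50, sigma=0.1):
    """Fill zero-valued (invalid) depth pixels by color-guided diffusion.

    Each iteration replaces hole pixels with a weighted average of their
    4-neighbors; weights decay exponentially with the color difference
    (an anisotropic, edge-stopping conductance) and neighbors that are
    themselves invalid get zero weight.
    """
    d = depth.astype(float).copy()
    hole = depth == 0                      # hole mask is fixed up front
    shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(iters):
        num = np.zeros_like(d)
        den = np.zeros_like(d)
        for dy, dx in shifts:
            nb = np.roll(d, (dy, dx), axis=(0, 1))            # neighbor depth
            cdiff = np.roll(color, (dy, dx), axis=(0, 1)) - color
            # edge-stopping weight: small across strong color edges,
            # zero for still-invalid neighbors
            w = np.exp(-(cdiff ** 2) / (2 * sigma ** 2)) * (nb > 0)
            num += w * nb
            den += w
        reachable = den > 0
        # update only hole pixels that have at least one valid neighbor
        d[hole & reachable] = (num / np.maximum(den, 1e-12))[hole & reachable]
    return d

depth = np.full((5, 5), 2.0)
depth[2, 2] = 0.0                          # one invalid pixel
filled = fill_depth_holes(depth, np.zeros((5, 5)))
```

Because weights vanish across color edges, a hole straddling an object boundary is filled mostly from its own side of the edge, which is the property that motivates using the color image as a guide.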


2017 ◽  
Author(s):  
Xiaohui Yang ◽  
Zhiquan Feng ◽  
Tao Xu ◽  
Yan Jiang ◽  
Haokui Tang
