Distance Error Correction in Time-of-Flight Cameras Using Asynchronous Integration Time

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1156
Author(s):  
Eu-Tteum Baek ◽  
Hyung-Jeong Yang ◽  
Soo-Hyung Kim ◽  
Gueesang Lee ◽  
Hieyong Jeong

A distance map captured using a time-of-flight (ToF) depth sensor suffers from fundamental problems, such as ambiguous depth information on shiny or dark surfaces, optical noise, and mismatched boundaries. Severe depth errors occur on shiny and dark surfaces owing to excess reflection and excess absorption of light, respectively. This problem has been difficult to address because of the inherent hardware limitation of ToF, which measures distance from the number of reflected photons. This study proposes a distance error correction method that uses three ToF sensors set to different integration times to resolve the ambiguity in depth information. First, the three ToF depth sensors are installed horizontally and set to different integration times to capture distance maps. Next, amplitude maps and error regions are estimated from the amount of received light, and the estimated error regions are refined by exploiting the accurate depth information from the neighboring depth sensors that use different integration times. Moreover, we propose a new optical noise reduction filter that accounts for a depth distribution biased toward one side. Experimental results verify that the proposed method overcomes these drawbacks of ToF cameras and provides enhanced distance maps.
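
The multi-sensor fusion step lends itself to a short illustration. Below is a minimal sketch, not the authors' exact method: it assumes three co-registered depth/amplitude map pairs and hypothetical amplitude thresholds (`amp_low`, `amp_high`) to flag under-exposed (dark, absorbing) and saturated (shiny, reflecting) pixels, then picks, per pixel, the depth from the best-exposed sensor.

```python
import numpy as np

def fuse_tof_depth(depths, amplitudes, amp_low=0.05, amp_high=0.95):
    """Sketch of fusing depth maps from three ToF sensors run at
    different integration times (illustrative, not the paper's method).

    depths, amplitudes: lists of co-registered HxW float arrays.
    amp_low/amp_high: hypothetical thresholds marking under-exposed
    and saturated pixels, with amplitudes normalized to [0, 1].
    """
    depths = np.stack(depths)          # (3, H, W)
    amps = np.stack(amplitudes)        # (3, H, W)

    # A pixel is trusted only if its amplitude is neither too low
    # (excess absorption) nor saturated (excess reflection).
    valid = (amps > amp_low) & (amps < amp_high)

    # Among valid sensors, prefer the one whose amplitude is closest
    # to mid-range, i.e. the best-exposed measurement at that pixel.
    score = np.where(valid, -np.abs(amps - 0.5), -np.inf)
    best = np.argmax(score, axis=0)    # (H, W) sensor index per pixel

    fused = np.take_along_axis(depths, best[None], axis=0)[0]
    fused[~valid.any(axis=0)] = np.nan  # no sensor gave a usable value
    return fused
```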

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Lalith Sharan ◽  
Lukas Burger ◽  
Georgii Kostiuchik ◽  
Ivo Wolf ◽  
Matthias Karck ◽  
...  

In endoscopy, depth estimation is a task that can help quantify visual information for better scene understanding. A plethora of depth estimation algorithms have been proposed in the computer vision community. The endoscopic domain, however, differs from the typical depth estimation scenario in its setup and in the nature of the scene. Furthermore, it is infeasible to obtain ground-truth depth information, owing to the unsuitable detection range of off-the-shelf depth sensors and the difficulty of setting up a depth sensor in a surgical environment. In this paper, an existing self-supervised approach, called Monodepth [1], from the field of autonomous driving is applied to a novel dataset of stereo-endoscopic images from reconstructive mitral valve surgery. While it is already known that endoscopic scenes are more challenging than outdoor driving scenes, this paper performs experiments to quantify the comparison and describes the domain gap and challenges involved in transferring these methods.
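
For readers unfamiliar with Monodepth-style self-supervision, the core training signal is a photometric reconstruction loss on a stereo pair. The sketch below illustrates that idea only; Monodepth itself additionally uses an SSIM term, left-right consistency, and smoothness losses, and the sign and normalization conventions here are assumptions.

```python
import torch
import torch.nn.functional as F

def photometric_loss(left, right, disp):
    """Minimal sketch of the self-supervised signal behind
    Monodepth-style training: warp the right image with the predicted
    disparity and compare the result to the left image.

    left, right: (B, 3, H, W) stereo pair, values in [0, 1].
    disp: (B, 1, H, W) predicted disparity, normalized by image width.
    """
    b, _, h, w = left.shape
    # Base sampling grid in [-1, 1], the convention of F.grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=left.device),
        torch.linspace(-1, 1, w, device=left.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2).clone()
    # Shift x-coordinates by the disparity (scaled to [-1, 1] units).
    grid[..., 0] = grid[..., 0] - 2.0 * disp.squeeze(1)

    right_warped = F.grid_sample(right, grid, align_corners=True)
    return (left - right_warped).abs().mean()  # L1 photometric error
```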


2017 ◽  
Author(s):  
Peter Fuersattel ◽  
Christian Schaller ◽  
Andreas Maier ◽  
Christian Riess

2002 ◽  
Vol 11 (2) ◽  
pp. 176-188 ◽  
Author(s):  
Yuichi Ohta ◽  
Yasuyuki Sugaya ◽  
Hiroki Igarashi ◽  
Toshikazu Ohtsuki ◽  
Kaito Taguchi

In mixed reality, occlusions and shadows are important for a natural fusion between the real and virtual worlds. Achieving this requires dense depth information of the real world from the observer's viewing position. The depth sensor must be attached to the observer's see-through HMD because he/she moves around. The sensor should be small and light enough to be attached to the HMD and should produce a reliable dense depth map at video rate. Unfortunately, no such depth sensor is available. We propose a client/server depth-sensing scheme to solve this problem. A server sensor located at a fixed position in the real world acquires 3-D information about the world, and a client sensor attached to each observer produces a depth map from his/her viewing position using the 3-D information supplied by the server. Multiple clients can share the 3-D information of the server; we call this scheme Share-Z. In this paper, the concept and merits of Share-Z are discussed. An experimental system developed to demonstrate the feasibility of Share-Z is also described.
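
As a rough illustration of the client step in Share-Z, the sketch below assumes a simple pinhole model with hypothetical intrinsics `K` and pose `R`, `t` (none of which are specified in the abstract): the server's world-space points are projected into the observer's current camera, keeping the nearest depth per pixel.

```python
import numpy as np

def client_depth_map(points_world, K, R, t, h, w):
    """Sketch of the Share-Z client step under pinhole assumptions
    (not the paper's implementation): project the server's world-space
    3-D points into the observer's camera as a per-pixel depth map.

    points_world: (N, 3) points supplied by the server sensor.
    K: (3, 3) client intrinsics; R, t: client pose (world -> camera).
    h, w: output depth-map size in pixels.
    """
    cam = points_world @ R.T + t   # world -> client camera frame
    cam = cam[cam[:, 2] > 0]       # keep points in front of the camera
    uv = cam @ K.T                 # pinhole projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)

    depth = np.full((h, w), np.inf)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # z-buffer: where several points land on one pixel, the nearest wins.
    np.minimum.at(depth, (v[inside], u[inside]), cam[inside, 2])
    return depth
```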


2018 ◽  
Vol 2018 (13) ◽  
pp. 464-1-464-6
Author(s):  
Yunseok Song ◽  
Yo-Sung Ho

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
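
BLAINDER's own API is not reproduced here, but the general idea of simulating a labeled depth scan in Blender can be sketched with the standard `bpy` ray-casting call: sweep rays from a sensor origin and record each hit point together with the hit object's name as a semantic label. All names and parameters below other than the `bpy`/`mathutils` calls are illustrative assumptions.

```python
import math
import bpy
from mathutils import Vector

def scan_lidar(origin, h_steps=360, v_angles=(-15, 0, 15), max_dist=100.0):
    """Illustrative sketch (not BLAINDER's actual API) of a virtual
    LiDAR in Blender: cast rays from a sensor origin and record hit
    points labeled with the name of the object they hit.
    """
    # Scene.ray_cast takes a depsgraph in recent Blender versions.
    depsgraph = bpy.context.evaluated_depsgraph_get()
    scene = bpy.context.scene
    points = []
    for v_deg in v_angles:                  # vertical beam angles
        v = math.radians(v_deg)
        for i in range(h_steps):            # one full horizontal sweep
            h = 2 * math.pi * i / h_steps
            direction = Vector((math.cos(h) * math.cos(v),
                                math.sin(h) * math.cos(v),
                                math.sin(v)))
            hit, loc, _normal, _idx, obj, _mat = scene.ray_cast(
                depsgraph, Vector(origin), direction, distance=max_dist)
            if hit:
                # The hit object's name serves as the semantic label.
                points.append((loc.x, loc.y, loc.z, obj.name))
    return points
```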

