Super Resolving of the Depth Map for 3D Reconstruction of Underwater Terrain Using Kinect

Author(s):  
Yu Nakagawa ◽  
Keita Kihara ◽  
Ryunosuke Tadoh ◽  
Seiichi Serikawa ◽  
Huimin Lu ◽  
...  
Author(s):  
Y. Song ◽  
K. Köser ◽  
T. Kwasnitschka ◽  
R. Koch

<p><strong>Abstract.</strong> With the rapid development and availability of underwater imaging technologies, underwater visual recording is widely used for a variety of tasks. However, quantitative imaging and photogrammetry in the underwater case pose many challenges (strong geometric distortion and radiometric issues) that limit the traditional photogrammetric workflow in underwater applications. This paper presents an iterative refinement approach that copes with refraction-induced distortion while building on top of a standard photogrammetry pipeline. The approach uses approximate geometry to compensate for water refraction effects in the images and then feeds the new images into the next iteration of 3D reconstruction, until the update of the resulting depth maps becomes negligible. Afterwards, the corrected depth map can also be used to compensate for the attenuation effect in order to obtain more realistic colors for the 3D model. To verify the geometric improvement of the proposed approach, a set of images with air-water refraction effects was rendered from a ground-truth model and the iterative refinement approach was applied to improve the 3D reconstruction. Finally, the paper shows application results for the 3D reconstruction of an underwater munition dump site in the Baltic Sea, for which a visual monitoring approach is desired.</p>
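The iterative scheme described in the abstract (reconstruct, compensate for refraction with the current geometry, reconstruct again until the depth update is negligible) can be sketched as a generic fixed-point loop. The `reconstruct` and `compensate` callables below are hypothetical stand-ins for the photogrammetry pipeline and the refraction-compensation step; in the paper the depth is a per-pixel map, but a scalar suffices to show the stopping criterion:

```python
def iterative_refinement(reconstruct, compensate, images, tol=1e-4, max_iter=20):
    """Alternate reconstruction and refraction compensation until the
    change in the resulting depth becomes negligible."""
    depth = reconstruct(images)
    for _ in range(max_iter):
        corrected = compensate(images, depth)   # undo refraction with current geometry
        new_depth = reconstruct(corrected)      # next reconstruction iteration
        if abs(new_depth - depth) < tol:        # update is negligible: stop
            return new_depth
        depth = new_depth
    return depth
```

The loop converges as long as each compensate-reconstruct round trip is a contraction toward the refraction-free geometry.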


2021 ◽  
Vol 1920 (1) ◽  
pp. 012075
Author(s):  
Tiansheng Wu ◽  
Hui Wang ◽  
Yanling Wang ◽  
Min Liang ◽  
Jie Li

2020 ◽  
Vol 32 (15) ◽  
pp. 11217-11228
Author(s):  
Yinzhang Ding ◽  
Lu Lin ◽  
Lianghao Wang ◽  
Ming Zhang ◽  
Dongxiao Li

2022 ◽  
Vol 355 ◽  
pp. 03026
Author(s):  
Shiheng Zhang ◽  
Shaopeng Zhang ◽  
Jianyang Chen ◽  
Xiuling Wang

3D reconstruction of the human body is an important research topic in 3D reconstruction and a challenging direction in the engineering field. This paper proposes a complete pipeline for 3D reconstruction of a human body model based on incremental structure from motion. First, a mobile phone is used to collect images from different angles, and the images are screened; second, SIFT feature extraction and matching, sparse reconstruction via incremental structure from motion, and depth-map-based dense reconstruction are carried out; finally, Poisson surface reconstruction is performed to obtain the reconstructed model. Experiments show that the reconstructed model is clear.
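The matching stage of such a pipeline typically pairs SIFT descriptors by nearest-neighbour search with Lowe's ratio test. A minimal pure-Python sketch of that step (descriptors as plain float lists, squared Euclidean distances; not the paper's actual implementation):

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test, as used after
    SIFT feature extraction.  Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, da in enumerate(desc_a):
        # squared Euclidean distance to every descriptor in the second image
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)), j)
            for j, db in enumerate(desc_b)
        )
        # accept only if the best match is clearly better than the runner-up
        if len(dists) >= 2 and dists[0][0] < (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test suppresses ambiguous correspondences, which is what keeps the subsequent incremental structure-from-motion stage stable.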


Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

3D reconstruction with an RGB-D camera offers a good balance of hardware cost, point cloud quality, and automation. However, due to limitations of the sensor's structure and imaging principle, the acquired point clouds suffer from heavy noise and are difficult to register. This paper proposes a 3D reconstruction method using the Azure Kinect to address these problems. Color, depth, and near-infrared images of the target are captured from six viewpoints with the Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the RGB-D image alignment result provided by Microsoft to remove ghost images and most of the background noise. To filter floating-point and outlier noise in the point cloud, a neighborhood maximum filtering method is proposed that removes abrupt points from the depth map; floating points are thus eliminated before the point cloud is generated, and a pass-through filter then removes the remaining outlier noise. To address the shortcomings of the classic ICP algorithm, an improved registration method is proposed: by progressively reducing the down-sampling grid size and the distance threshold between corresponding points, the point clouds of all views are registered in three successive passes until a complete color point cloud is obtained. Extensive experiments on rape plants show that the method achieves a point cloud accuracy of 0.739 mm, a complete scan time of 338.4 seconds, and high color fidelity. Compared with a laser scanner, the proposed method offers comparable reconstruction accuracy at a significantly higher reconstruction speed, with much lower hardware cost and easier automation of the scanning system. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential for wide use in non-destructive measurement of crop phenotypes.
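The abstract does not spell out the exact neighborhood criterion, but one plausible reading of removing "abrupt points" is to drop any depth pixel that has no valid neighbour at a similar depth. A toy sketch under that assumption (2-D list of depths in millimetres, invalid pixels marked 0; the window size and jump threshold are hypothetical):

```python
def remove_floating_points(depth, win=1, max_jump=50.0):
    """Remove 'floating' depth pixels that have no neighbour at a similar
    depth, before the point cloud is generated."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] <= 0:
                continue  # already invalid
            nb = [depth[j][i]
                  for j in range(max(0, y - win), min(h, y + win + 1))
                  for i in range(max(0, x - win), min(w, x + win + 1))
                  if (i, j) != (x, y) and depth[j][i] > 0]
            # an abrupt point is far from every valid neighbour
            if nb and min(abs(depth[y][x] - v) for v in nb) > max_jump:
                out[y][x] = 0.0
    return out
```

Filtering in the depth map rather than the point cloud is cheap, since each pixel only consults a small fixed window instead of a 3-D neighbourhood search.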


2020 ◽  
pp. short15-1-short15-9
Author(s):  
Vladimir Kniaz ◽  
Vladimir Knyaz ◽  
Vladimir Mizginov

Reconstruction of the 3D shape of a face and its texture is a challenging task in modern anthropology. While a skilled anthropologist can reconstruct the appearance of a prehistoric human from its skull, no automated methods exist to date for anthropological 3D face reconstruction and texturing. We propose a deep learning framework for the synthesis and visualization of photorealistic textures for 3D face reconstruction of prehistoric humans. Our framework leverages a joint face-skull model based on generative adversarial networks. Specifically, we train two image-to-image translation models to separate 3D face reconstruction from texturing. The first model translates an input depth map of a human skull into a possible depth map of its face together with a semantic labeling of its parts. The second model performs a multimodal translation of the generated semantic labeling into multiple photorealistic textures. We generate a dataset consisting of 3D models of human faces and skulls to train our 3D reconstruction model. The dataset includes paired samples obtained from computed tomography and unpaired samples representing 3D models of skulls of prehistoric humans. We train our texture synthesis model on the CelebAMask-HQ dataset. We evaluate our model qualitatively and quantitatively and demonstrate that it provides robust 3D face reconstruction of prehistoric humans with multimodal photorealistic texturing.
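The two-stage design (skull depth to face depth plus semantic labels, then labels to several candidate textures) is a simple composition of the two trained translation models. The sketch below only shows the data flow; `face_model` and `texture_model` are hypothetical stand-ins for the trained networks, with the second taking a mode index to reflect the multimodal translation:

```python
def reconstruct_prehistoric_face(skull_depth, face_model, texture_model, n_textures=3):
    """Chain the two translation models: a skull depth map goes in, a face
    depth map plus several candidate photorealistic textures come out."""
    face_depth, labels = face_model(skull_depth)          # model 1: skull -> face + part labels
    textures = [texture_model(labels, k)                  # model 2: multimodal labels -> texture
                for k in range(n_textures)]
    return face_depth, textures
```

Decoupling geometry from texture means each model can be trained on a different dataset, which is exactly why the texture model can use CelebAMask-HQ while the geometry model uses the CT-derived face-skull pairs.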


2021 ◽  
Vol 2021 (18) ◽  
pp. 69-1-69-11
Author(s):  
Yin Wang ◽  
Davi He ◽  
Zillion Lin ◽  
George Chiu ◽  
Jan Allebach

In this paper, a low-cost, single-camera, double-mirror system that can be built into a desktop nail printer is described. The system captures an image of a fingernail and generates the 3D shape of the nail; the nail's depth map is then estimated from this rendered 3D nail shape. The paper describes the camera calibration process and explains the calibration theory for the proposed system, and then introduces a 3D reconstruction method as well. Experimental results are presented that illustrate the accuracy of the system in handling the rendering task.
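A double-mirror rig effectively gives the single camera two calibrated virtual viewpoints, so once calibration is done, depth follows from standard stereo triangulation. A minimal sketch of that relationship (the focal length, baseline, and disparity values in the test are illustrative, not from the paper):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d.  With a calibrated
    single-camera double-mirror rig, the two virtual cameras play the
    role of a conventional stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

Because the virtual baseline is fixed by the mirror geometry, calibration accuracy directly bounds the depth accuracy of the reconstructed nail surface.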


Author(s):  
WEI JIANG ◽  
SHIGEKI SUGIMOTO ◽  
MASATOSHI OKUTOMI

In this paper, we present a novel approach to imaging a panoramic (360°) environment and computing its dense depth map. Our approach adopts a multi-baseline stereo strategy using a set of multi-perspective panoramas in which large baseline lengths are available. We design two image acquisition rigs for capturing such multi-perspective panoramas. The first is composed of two parallel stereo cameras. By rotating the rig about a vertical axis, we generate four multi-perspective panoramas by resampling the regular perspective images captured by the stereo cameras. A depth map is then estimated from the four multi-perspective panoramas and an original perspective image using a multi-baseline matching technique with different types of epipolar constraints. The second rig is composed of a single camera and two mirrors. By rotating the rig, we acquire a spatio-temporal volume made up of the sequential images captured by the camera. We then estimate a depth map by extracting trajectories from the spatio-temporal volume using a multi-baseline stereo technique that considers occlusions. Both rotating rigs can be regarded as a single rotating camera with a very large field of view (FOV), which offers a large baseline length for depth estimation. In addition, compared with a previous approach using two multi-perspective panoramas from a single rotating camera, our approach reduces matching errors due to image noise, repeated patterns, and occlusions through multi-baseline stereo techniques. Experimental results using both synthetic and real images show that our approach produces high-quality panoramic 3D reconstructions.
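The core idea of multi-baseline stereo is that each candidate depth predicts a different disparity for each baseline, and summing the matching cost over all baselines disambiguates repeated patterns that would fool a single pair. A 1-D toy sketch of that cost aggregation (sum of squared differences, integer shifts; the signal values and geometry are illustrative only):

```python
def shift(sig, d):
    """Shift a 1-D signal by d samples (rounded), zero-padding the ends."""
    d = int(round(d))
    n = len(sig)
    return [sig[x - d] if 0 <= x - d < n else 0 for x in range(n)]

def ssd(a, b):
    """Sum of squared differences between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def multi_baseline_depth(ref, others, baselines, focal, candidates):
    """Pick the candidate depth whose per-baseline disparities (f*B/z)
    best align all views with the reference (minimum summed SSD)."""
    best_z, best_cost = None, float("inf")
    for z in candidates:
        cost = sum(ssd(ref, shift(img, -focal * b / z))
                   for img, b in zip(others, baselines))
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z
```

A wrong depth can accidentally align one pair on a repeated pattern, but it rarely minimizes the cost summed over several different baselines at once, which is the error-reduction effect the paper relies on.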

