Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5029
Author(s):  
Jingdao Chen ◽  
John Seon Keun Yi ◽  
Mark Kahoush ◽  
Erin S. Cho ◽  
Yong K. Cho

Collecting 3D point cloud data of buildings is important for many applications such as urban mapping, renovation, preservation, and energy simulation. However, laser-scanned point clouds are often difficult to analyze, visualize, and interpret due to incompletely scanned building facades caused by numerous sources of defects such as noise, occlusions, and moving objects. Several point cloud scene completion algorithms have been proposed in the literature, but they have been mostly applied to individual objects or small-scale indoor environments and not on large-scale scans of building facades. This paper introduces a method of performing point cloud scene completion of building facades using orthographic projection and generative adversarial inpainting methods. The point cloud is first converted into the 2D structured representation of depth and color images using an orthographic projection approach. Then, a data-driven 2D inpainting approach is used to predict the complete version of the scene, given the incomplete scene in the image domain. The 2D inpainting process is fully automated and uses a customized generative-adversarial network based on Pix2Pix that is trainable end-to-end. The inpainted 2D image is finally converted back into a 3D point cloud using depth remapping. The proposed method is compared against several baseline methods, including geometric methods such as Poisson reconstruction and hole-filling, as well as learning-based methods such as the point completion network (PCN) and TopNet. Performance evaluation is carried out based on the task of reconstructing real-world building facades from partial laser-scanned point clouds. Experimental results using the performance metrics of voxel precision, voxel recall, position error, and color error showed that the proposed method has the best performance overall.
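The projection and depth-remapping steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid resolution and the assumption that the facade lies roughly parallel to the XZ plane (depth along +Y) are mine, and the color channel would be rasterized analogously to depth.

```python
import numpy as np

def ortho_project(points, res=0.05):
    """Project a facade point cloud (N, 3) to a 2D depth image.

    Assumes the facade is roughly parallel to the XZ plane, with
    depth along +Y (an assumption for this sketch; the paper's
    projection setup may differ). Each pixel keeps the nearest depth.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / res).astype(int)
    rows = ((z.max() - z) / res).astype(int)
    depth = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, d in zip(rows, cols, y):
        if np.isnan(depth[r, c]) or d < depth[r, c]:
            depth[r, c] = d
    return depth, x.min(), z.max()

def depth_remap(depth, x0, z1, res=0.05):
    """Convert the (possibly inpainted) depth image back to 3D points."""
    rows, cols = np.nonzero(~np.isnan(depth))
    x = x0 + cols * res
    z = z1 - rows * res
    y = depth[rows, cols]
    return np.column_stack([x, y, z])
```

An inpainting network fills the NaN pixels in the image domain before `depth_remap` converts the completed image back to a point cloud.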

Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud. It can generate point clouds, supervised by airborne LiDAR observations, from aerial images. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as fake or real. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet conditioned on the ResNet features. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
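The core FoldingNet-style idea of conditioning a fixed 2D grid on an image codeword can be sketched in a few lines. This is a structural illustration only, with a single shared linear map standing in for the shared MLP; the paper's actual architecture, layer counts, and training are not reproduced here.

```python
import numpy as np

def folding_step(codeword, grid, weights, bias):
    """One folding-style layer (sketch, not the paper's exact network).

    Tiles the image-derived latent codeword onto every point of a fixed
    2D grid, then applies a shared per-point linear map (a stand-in for
    the shared MLP) to produce 3D coordinates.
    """
    m = grid.shape[0]
    tiled = np.tile(codeword, (m, 1))           # (M, k)
    feats = np.concatenate([grid, tiled], 1)    # (M, k + 2)
    return feats @ weights + bias               # (M, 3)
```

In a full network this step is applied with learned weights, so the flat grid is "folded" into the 3D shape encoded by the codeword.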


2021 ◽  
Vol 10 (3) ◽  
pp. 157
Author(s):  
Paul-Mark DiFrancesco ◽  
David A. Bonneau ◽  
D. Jean Hutchinson

Key to the quantification of rockfall hazard is an understanding of its magnitude-frequency behaviour. Remote sensing has allowed for the accurate observation of rockfall activity, with methods being developed for digitally assembling the monitored occurrences into a rockfall database. A prevalent challenge is the quantification of rockfall volume, whilst fully considering the 3D information stored in each of the extracted rockfall point clouds. Surface reconstruction is utilized to construct a 3D digital surface representation, allowing for an estimation of the volume of space that a point cloud occupies. Given various point cloud imperfections, it is difficult for methods to generate digital surface representations of rockfall with detailed geometry and correct topology. In this study, we tested four different computational geometry-based surface reconstruction methods on a database comprising 3668 rockfalls. The database was derived from a 5-year LiDAR monitoring campaign of an active rock slope in interior British Columbia, Canada. Each method resulted in a different magnitude-frequency distribution of rockfall. The implications of 3D volume estimation were demonstrated utilizing surface mesh visualization, cumulative magnitude-frequency plots, power-law fitting, and projected annual frequencies of rockfall occurrence. The 3D volume estimation methods caused a notable shift in the magnitude-frequency relations, while the power-law scaling parameters remained relatively similar. We determined that the optimal 3D volume calculation approach is a hybrid methodology combining the Power Crust reconstruction and the Alpha Solid reconstruction. The Alpha Solid approach is to be used on small-scale point clouds, characterized by high curvatures relative to their sampling density, which challenge the Power Crust sampling assumptions.
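The power-law fitting of the cumulative magnitude-frequency relation mentioned above follows the standard form N(>V) = a·V^(−b). A minimal least-squares sketch in log-log space is shown below; the study's own fitting procedure and thresholds may differ.

```python
import numpy as np

def fit_power_law(volumes):
    """Fit N(>V) = a * V**(-b) to a set of rockfall volumes.

    Sketch only: a simple log-log linear fit of the cumulative
    exceedance count, not the study's exact fitting procedure.
    """
    v = np.sort(np.asarray(volumes, float))
    n = np.arange(len(v), 0, -1)   # count of events exceeding each v
    slope, intercept = np.polyfit(np.log10(v), np.log10(n), 1)
    return 10 ** intercept, -slope   # (a, b)
```

Because different volume estimators shift the volumes themselves, rerunning such a fit per reconstruction method exposes how the magnitude-frequency relation moves while the scaling exponent b stays comparatively stable.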


Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from 3D point clouds, the natural 3D patterns and invariances of the data are often obscured. Objective: In this work, we aimed to study 3D object detection from discrete, disordered and sparse 3D point clouds. Methods: The CNN is composed of the frustum sequence module, the 3D instance segmentation module S-NET, the 3D point cloud transformation module T-NET, and the 3D bounding box estimation module E-NET. The search space of the object is determined by the frustum sequence module. The instance segmentation of the point cloud is performed by the 3D instance segmentation module. The 3D coordinates of the object are confirmed by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection by proposing an improved convolutional neural network (CNN) based on image-driven point clouds.
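The frustum-based restriction of the 3D search space can be illustrated as follows: project camera-frame points through a pinhole model and keep only those that land inside a 2D detection box. The intrinsics and box here are placeholders, and this is the general frustum idea rather than the paper's exact module.

```python
import numpy as np

def frustum_points(points, box, fx, fy, cx, cy):
    """Select the candidate 3D search space for one 2D detection.

    Sketch of the frustum idea: camera-frame points (N, 3, z forward)
    are projected with a pinhole model, and only points whose pixel
    falls inside the 2D box (u0, v0, u1, v1) are kept. Intrinsics
    are illustrative placeholders.
    """
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx
    v = fy * points[:, 1] / z + cy
    u0, v0, u1, v1 = box
    mask = (z > 0) & (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return points[mask]
```

Instance segmentation and box estimation then operate only on this much smaller frustum subset, which is what makes real-time operation plausible.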


2021 ◽  
Author(s):  
Ali Mirzazade ◽  
Cosmin Popescu ◽  
Thomas Blanksvärd ◽  
Björn Täljsten

<p>In bridge inspection, vertical displacement is a relevant parameter for both short- and long-term health monitoring. Assessing changes in deflections could also simplify the assessment work for inspectors. Recent developments in digital camera technology and photogrammetry software enable point clouds with colour information (RGB values) to be generated. Thus, close range photogrammetry offers the potential of monitoring large- and small-scale damage through point clouds. The current paper aims to monitor geometrical deviations in the Pahtajokk Bridge, Northern Sweden, using an optical data acquisition technique. The bridge in this study was scanned twice, almost one year apart. After point cloud generation, the datasets were compared to detect geometrical deviations. The first scan was carried out by both close range photogrammetry (CRP) and terrestrial laser scanning (TLS), while the second scan was performed by CRP only. Analysis of the results shows the potential of CRP for bridge inspection.</p>
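Comparing two scanning epochs to detect geometrical deviations reduces, in its simplest form, to a per-point nearest-neighbour distance between the clouds. The brute-force sketch below illustrates the idea; production tools use spatial indexing or more robust change-detection methods (e.g. M3C2), and the paper's processing chain is not reproduced here.

```python
import numpy as np

def cloud_deviation(epoch1, epoch2):
    """Per-point deviation of epoch1 relative to epoch2.

    Brute-force nearest-neighbour sketch: for each point in the first
    epoch, the distance to its closest point in the second epoch.
    """
    d = np.linalg.norm(epoch1[:, None, :] - epoch2[None, :, :], axis=2)
    return d.min(axis=1)
```

Large deviations between epochs then flag regions of possible deflection or damage for closer inspection.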


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm, replacing conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on the vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimation even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
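The two-stage idea can be sketched crudely: bin points by vertical distance below the sensor (a stand-in for clustering and horizontal-plane segmentation), keep only well-supported bins as candidate planes, and take the deepest one as the ground. The bin size and support threshold below are illustrative, not the paper's parameters.

```python
import numpy as np

def estimate_altitude(points, bin_size=0.1, min_points=20):
    """Estimate flight altitude from a sensor-frame point cloud.

    Sketch of the two-stage approach: bin points by vertical distance
    below the sensor (z up, sensor at origin), keep well-supported bins
    as candidate horizontal planes, and treat the deepest plane as the
    ground, so boxes or people standing on the floor do not bias the
    estimate. Thresholds are illustrative.
    """
    below = points[points[:, 2] < 0]
    depth = -below[:, 2]
    bins = np.round(depth / bin_size).astype(int)
    ids, counts = np.unique(bins, return_counts=True)
    planes = ids[counts >= min_points] * bin_size
    return planes.max() if planes.size else None
```

Selecting the deepest supported plane is the key robustness trick: obstacle tops form shallower planes and are ignored.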


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming increasingly popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights between the reference point cloud and the tested planes, combined with point cloud segmentation, provides a tool that indicates which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.


2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data is effective at expressing complex geometric structures and geometric details on the surface of cultural relics, but lacks semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized through the time dimension and different spatial scales indicating different geometric details. The proposed model allows smart point clouds or a subset to be linked with semantic information or related documents. As such, this novel information modeling framework not only expresses the complex geometric structure of the cultural relics and the geometric details on their surfaces, but also carries rich semantic information and can even be associated with documents. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, reveals that our proposed framework is capable of modeling and processing the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
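A minimal data-structure sketch of the smart-point-cloud idea is shown below: point subsets organized by epoch and spatial scale, with named subsets linkable to semantic labels and documents. The field names are illustrative and are not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class SmartPointCloud:
    """Sketch of a smart point cloud container (illustrative schema).

    Point subsets are organized by epoch (time dimension) and spatial
    scale (level of geometric detail); named subsets can be linked to
    semantic labels and to related documents.
    """
    epochs: dict = field(default_factory=dict)      # epoch -> scale -> point ids
    semantics: dict = field(default_factory=dict)   # subset name -> label
    documents: dict = field(default_factory=dict)   # subset name -> file refs

    def link(self, name, label, docs=()):
        """Attach a semantic label and optional documents to a subset."""
        self.semantics[name] = label
        self.documents[name] = list(docs)
```

Keeping geometry, semantics, and documents in one container is what lets a query return both the relevant points and the records describing them.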


2019 ◽  
Vol 12 (1) ◽  
pp. 112 ◽  
Author(s):  
Dong Lin ◽  
Lutz Bannehr ◽  
Christoph Ulrich ◽  
Hans-Gerd Maas

Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed to process the data acquired by the oblique airborne photogrammetric system AOS-Tx8, consisting of four thermal cameras and four RGB cameras, with the goal of large-scale area thermal attribute mapping. In order to merge 3D RGB data and 3D thermal data, registration is conducted in four steps: First, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both the thermal and RGB imagery. Next, a coarse point cloud registration is performed with the support of georeferencing data (global positioning system, GPS). Subsequently, a fine point cloud registration is conducted by octree-based iterative closest point (ICP). Finally, three different texture mapping strategies are compared. Experimental results showed that the global image pose refinement outperforms the other two strategies in registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be detected quickly in the generated texture mapping results. Furthermore, a combination of the proposed workflow and the oblique airborne system allows for a detailed thermal analysis of building roofs and facades.
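At the core of each ICP iteration in the fine registration step is a rigid alignment of matched point pairs, classically solved with the Kabsch/SVD method. The sketch below shows that single alignment step under the assumption that correspondences are already given; in real ICP, nearest-neighbour search (here accelerated by an octree) supplies the pairing each iteration.

```python
import numpy as np

def rigid_align(src, dst):
    """Optimal rotation R and translation t with dst[i] ~ R @ src[i] + t.

    Kabsch/SVD solution for paired point sets: the core step inside
    each ICP iteration. Assumes src[i] corresponds to dst[i].
    """
    sc, dc = src.mean(0), dst.mean(0)
    h = (src - sc).T @ (dst - dc)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    s = np.eye(3)
    s[2, 2] = np.sign(np.linalg.det(vt.T @ u.T))   # avoid reflections
    r = vt.T @ s @ u.T
    t = dc - r @ sc
    return r, t
```

Iterating match-then-align until the transform converges registers the thermal cloud onto the RGB cloud after the GPS-based coarse alignment.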

