Improving the Accuracy of Automatic Reconstruction of 3D Complex Buildings Models from Airborne Lidar Point Clouds

2020 ◽  
Vol 12 (10) ◽  
pp. 1643 ◽  
Author(s):  
Marek Kulawiak ◽  
Zbigniew Lubniewski

Due to the high requirements of a variety of 3D spatial data applications with respect to data amount and quality, automated, efficient and reliable data acquisition and preprocessing methods are needed. The use of photogrammetry techniques, as well as light detection and ranging (LiDAR) automatic scanners, is among the attractive solutions. However, measurement data come in the form of unorganized point clouds, usually requiring transformation to higher-order 3D models based on polygons or polyhedral surfaces, which is not a trivial process. The study presents a newly developed algorithm for correcting 3D point cloud data from airborne LiDAR surveys of regular 3D buildings. The proposed approach applies a sequence of operations resulting in 3D rasterization, i.e., the creation and processing of a 3D regular grid representation of an object, prior to applying a standard Poisson surface reconstruction method. To verify the accuracy and quality of the reconstructed objects, high-quality ground truth models were used for quantitative comparison with the obtained 3D models, in the form of meshes constructed from photogrammetric measurements and built manually from the buildings' architectural plans. The presented results show that applying the proposed algorithm positively influences the quality of the results, and that it can be used in combination with existing surface reconstruction methods to generate more detailed 3D models from LiDAR scanning.
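The 3D rasterization step described above, binning an unorganized point cloud into a regular voxel grid, can be sketched as follows. This is a minimal, hypothetical illustration of the grid-construction idea only; the paper's actual pipeline applies further correction operations to the grid before Poisson reconstruction.

```python
from collections import defaultdict

def rasterize_3d(points, voxel_size):
    """Map an unorganized point cloud onto a regular 3D grid.

    Each point is binned into the voxel containing it; the result maps
    integer (i, j, k) voxel indices to the points that fell inside.
    """
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key].append((x, y, z))
    return grid

# Example: three roof points; the first two share a voxel
pts = [(0.1, 0.2, 9.8), (0.3, 0.1, 9.9), (5.2, 4.9, 9.7)]
grid = rasterize_3d(pts, voxel_size=1.0)
print(len(grid))  # 2 occupied voxels
```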

2021 ◽  
Vol 13 (10) ◽  
pp. 1947
Author(s):  
Yuanzhi Cai ◽  
Lei Fan

Recent years have witnessed an increasing use of 3D models in general, and 3D geometric models of the built environment in particular, for various applications, owing to the advancement of mapping techniques for accurate 3D information. Depending on the application scenario, various types of approaches exist to automate the construction of 3D building geometry. However, in those studies, less attention has been paid to watertight geometries derived from point cloud data, which are useful for building energy management and simulation. To this end, an efficient reconstruction approach was introduced in this study, involving the following key steps. The point cloud data are first voxelised for ray-casting analysis to obtain the 3D indoor space. By projecting it onto a horizontal plane, an image representing the indoor area is obtained and used for room segmentation. The 2D boundary of each room candidate is extracted using new grammar rules and extruded using the room height to generate 3D models of individual room candidates. Room connection analyses are applied to the individual models to determine the locations of doors and the topological relations between adjacent room candidates, forming an integrated and watertight geometric model. The proposed approach was tested using point cloud data representing six building sites with distinct spatial configurations of rooms, corridors and openings. The experimental results showed that accurate watertight building geometries were successfully created. The average differences between the point cloud data and the geometric models obtained ranged from 12 to 21 mm. The maximum computation time was less than 5 min for a point cloud of approximately 469 million data points, more efficient than typical reconstruction methods in the literature.
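The projection step, collapsing the 3D indoor-space voxels onto a horizontal plane to obtain an indoor-area image, can be sketched as below. This is an invented minimal version of that single step; the paper's indoor-space detection (ray casting) and room segmentation are not shown.

```python
def project_to_floor(indoor_voxels, nx, ny):
    """Project 3D indoor-space voxels onto a horizontal plane.

    indoor_voxels: set of (i, j, k) voxel indices flagged as indoor
    space (e.g. by a ray-casting analysis). Returns an nx-by-ny binary
    image where a pixel is 1 if any voxel in that column is indoor.
    """
    image = [[0] * ny for _ in range(nx)]
    for i, j, _k in indoor_voxels:
        image[i][j] = 1
    return image

# Two voxels stacked in one column collapse to a single pixel
voxels = {(0, 0, 0), (0, 0, 1), (1, 2, 3)}
img = project_to_floor(voxels, nx=3, ny=4)
print(sum(map(sum, img)))  # 2 indoor pixels
```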


2021 ◽  
Vol 10 (3) ◽  
pp. 157
Author(s):  
Paul-Mark DiFrancesco ◽  
David A. Bonneau ◽  
D. Jean Hutchinson

Key to the quantification of rockfall hazard is an understanding of its magnitude-frequency behaviour. Remote sensing has allowed for the accurate observation of rockfall activity, with methods being developed for digitally assembling the monitored occurrences into a rockfall database. A prevalent challenge is the quantification of rockfall volume, whilst fully considering the 3D information stored in each of the extracted rockfall point clouds. Surface reconstruction is utilized to construct a 3D digital surface representation, allowing for an estimation of the volume of space that a point cloud occupies. Given various point cloud imperfections, it is difficult for methods to generate digital surface representations of rockfall with detailed geometry and correct topology. In this study, we tested four different computational geometry-based surface reconstruction methods on a database of 3668 rockfalls. The database was derived from a 5-year LiDAR monitoring campaign of an active rock slope in interior British Columbia, Canada. Each method resulted in a different magnitude-frequency distribution of rockfall. The implications of 3D volume estimation were demonstrated utilizing surface mesh visualization, cumulative magnitude-frequency plots, power-law fitting, and projected annual frequencies of rockfall occurrence. The 3D volume estimation methods caused a notable shift in the magnitude-frequency relations, while the power-law scaling parameters remained relatively similar. We determined that the optimal 3D volume calculation approach is a hybrid methodology combining the Power Crust reconstruction and the Alpha Solid reconstruction. The Alpha Solid approach is to be used on small-scale point clouds, characterized by high curvatures relative to their sampling density, which challenge the Power Crust sampling assumptions.
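Once any of the reconstruction methods above yields a watertight, consistently oriented mesh, its enclosed volume follows from the divergence theorem: sum the signed volumes of the tetrahedra formed by each triangle and the origin. The sketch below is a generic version of that final volume computation, not code from the study.

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    Sums signed tetrahedron volumes (1/6 of the scalar triple product
    of each triangle's vertices); abs() handles inward orientation.
    """
    total = 0.0
    for a, b, c in faces:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (
            vertices[a], vertices[b], vertices[c])
        total += (x1 * (y2 * z3 - y3 * z2)
                  - x2 * (y1 * z3 - y3 * z1)
                  + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return abs(total)

# Unit cube: 8 vertices, 12 outward-facing triangles
V = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
F = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
print(mesh_volume(V, F))  # ≈ 1.0
```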


2019 ◽  
Author(s):  
Dejun Yang ◽  
Changming Wang ◽  
Hongbing Fu ◽  
Ziran Wei ◽  
Xin Zhang ◽  
...  

Abstract Background and Aims Routine gastroesophagostomy has been shown to have adverse effects on the recovery of digestive function and quality of life, because patients typically experience reflux symptoms after proximal gastrectomy. This study was performed to assess the feasibility and quality-of-life benefits of a novel reconstruction method termed Roux-en-Y anastomosis plus antral obstruction (RYAO) following proximal partial gastrectomy. Methods A total of 73 patients who underwent proximal gastrectomy from June 2015 to June 2017 were divided into two groups according to the digestive reconstruction method [RYAO (37 patients) and conventional esophagogastric anastomosis with pyloroplasty (EGPP, 36 patients)]. Clinical data were compared between the two groups retrospectively. Results The mean operative time for digestive reconstruction was slightly longer in the RYAO group than in the EGPP group. However, the incidence of postoperative short-term complications did not differ between the RYAO and EGPP groups. At the 6-month follow-up, the incidence rates of both reflux esophagitis and gastritis were lower in the RYAO group than in the EGPP group (P = 0.002). Additionally, body weight recovery was better in the RYAO group (P = 0.028). The scale tests indicated that compared with the patients in the EGPP group, the patients in the RYAO group had significantly reduced reflux, nausea and vomiting and reported improvements in their overall health status and quality of life (all P < 0.05). Conclusion RYAO reconstruction may be a feasible procedure to reduce postoperative reflux symptoms and the incidence of reflux esophagitis and gastritis, thus improving patient quality of life after proximal gastrectomy.


Author(s):  
Z. Li ◽  
W. Zhang ◽  
J. Shan

Abstract. Building models are conventionally reconstructed by segmenting building roof points into planes and then using a topology graph to group the planes together. Roof edges and vertices are then mathematically represented by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not simultaneously participate in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method that takes the entire point cloud of one building into consideration simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of each segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D with predefined roof types is used for our test. It is shown that PointNet++ applied to the entire dataset can achieve an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of the primitives and reconstruct the buildings. The achieved overall quality of reconstruction is 0.08 meters in terms of point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
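The contrast between sequential local fitting and holistic fitting can be made concrete with a toy example: fitting the plane z = a·x + b·y + c to all points of a primitive at once, by solving the least-squares normal equations. The actual method fits full parametric roof primitives, not a single plane; this stdlib-only sketch only illustrates "every point participates in one optimization".

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c over ALL points at once."""
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x*x; sxy += x*y; sx += x
        syy += y*y; sy += y; n += 1
        sxz += x*z; syz += y*z; sz += z

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
                - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
                + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]))

    # Normal equations A [a b c]^T = rhs, solved by Cramer's rule
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    sol = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = rhs[r]
        sol.append(det3(M) / d)
    return tuple(sol)

# Points sampled exactly from z = 2x + 3y + 1
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_plane(pts)
print(a, b, c)  # 2.0 3.0 1.0
```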


2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Author(s):  
So-Young Park ◽  
Dae Geon Lee ◽  
Eun Jin Yoo ◽  
Dong-Cheon Lee

Light detection and ranging (LiDAR) data collected from airborne laser scanning systems are one of the major sources of spatial data. Airborne laser scanning systems have the capacity for rapid and direct acquisition of accurate 3D coordinates. Use of LiDAR data is increasing in various applications, such as topographic mapping, building and city modeling, biomass measurement, and disaster management. Segmentation is a crucial process in the extraction of meaningful information for applications such as 3D object modeling and surface reconstruction. Most LiDAR processing schemes are based on digital image processing and computer vision algorithms. This paper introduces a shape descriptor method for segmenting LiDAR point clouds using a “multilevel cube code” that is an extension of the 2D chain code to 3D space. The cube operator segments point clouds into roof surface patches, including superstructures, removes unnecessary objects, detects the boundaries of buildings, and determines model key points for building modeling. Both real and simulated LiDAR data were used to verify the proposed approach. The experiments demonstrated the feasibility of the method for segmenting LiDAR data from buildings with a wide range of roof types. The method was found to segment point cloud data effectively.
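The core idea of extending a 2D chain code to 3D can be sketched as follows: a 2D chain code records which of 8 neighbours comes next along a boundary, so the 3D analogue records one of 26 voxel-neighbour offsets per step. This is a hypothetical minimal version of the concept behind the paper's "multilevel cube code"; the actual operator also works across multiple grid resolutions.

```python
def chain_code_3d(voxel_path):
    """Encode a path of 26-adjacent voxels as direction codes (0-25)."""
    # Enumerate the 26 neighbour offsets in a fixed order
    offsets = [(dx, dy, dz)
               for dx in (-1, 0, 1)
               for dy in (-1, 0, 1)
               for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    code = []
    for (x0, y0, z0), (x1, y1, z1) in zip(voxel_path, voxel_path[1:]):
        code.append(offsets.index((x1 - x0, y1 - y0, z1 - z0)))
    return code

# Steps in +x, +y, +z along a voxel boundary
path = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(chain_code_3d(path))  # [21, 15, 13]
```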


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of the unmanned aerial vehicle (UAV) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. To be specific, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition by the use of a classification tree and bridge geometry is utilized to recognize different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments using two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing in 3D digital documentation of heritage bridges. By using given markers, the reconstruction error of point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the testing data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
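The rule-based classification step can be illustrated with a toy decision function that labels a segment from simple geometric features of its bounding box. The rules, thresholds, and labels below are invented for illustration and are not taken from the paper's classification tree.

```python
def classify_segment(bbox_min, bbox_max, deck_height):
    """Toy rule-based classifier for a bridge segment.

    Labels a segment from the extents of its axis-aligned bounding box:
    tall slender elements become piers, flat horizontal elements near
    deck level become deck. Thresholds are hypothetical.
    """
    dx = bbox_max[0] - bbox_min[0]
    dy = bbox_max[1] - bbox_min[1]
    dz = bbox_max[2] - bbox_min[2]
    if dz > 2 * max(dx, dy):
        return "pier"   # tall, slender vertical element
    if bbox_min[2] >= deck_height and max(dx, dy) > 5 * dz:
        return "deck"   # flat, horizontal, at or above deck level
    return "other"

print(classify_segment((0, 0, 0), (1, 1, 8), deck_height=7))   # pier
print(classify_segment((0, 0, 7), (30, 4, 8), deck_height=7))  # deck
```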


Author(s):  
V. Paleček ◽  
P. Kubíček

A large increase in the creation of 3D models of the objects all around us has been observed in the last few years, thanks to the rapid development of new advanced technologies for spatial data collection and robust software tools. Newly available commercial airborne laser scanning data in the Czech Republic, provided in the form of the Digital Terrain Model of the fifth generation as irregularly spaced points, enable locating the majority of rock formations. However, the positional and height accuracy for this type of landform can exhibit large errors in some cases. Therefore, it is necessary to start mapping with terrestrial laser scanning, with the possibility of adding point cloud data derived from ground or aerial photogrammetry. Intensity correction and noise removal are usually based on the distance between the measured objects and the laser scanner, the incidence angle of the beam, or the radiometric and topographic characteristics of the measured objects. This contribution presents the major undesirable effects that affect the quality of acquisition and processing of laser scanning data, and introduces solutions to some of these problems.
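A commonly used intensity-correction model of the kind mentioned above normalizes the raw return by the squared range and the cosine of the incidence angle: I_corr = I_raw · (R/R_ref)² / cos θ. The sketch below implements that generic model; it is not necessarily the exact correction used in this contribution.

```python
import math

def correct_intensity(raw, distance, incidence_deg, ref_distance=1.0):
    """Range and incidence-angle correction of LiDAR return intensity.

    Intensity falls off with the square of range and with the cosine
    of the incidence angle, so the raw value is normalized by both.
    """
    return raw * (distance / ref_distance) ** 2 / math.cos(
        math.radians(incidence_deg))

# A return at 10 m range hitting the rock face at a 60-degree angle
print(correct_intensity(100.0, distance=10.0, incidence_deg=60.0))
```

Note that the model breaks down near grazing incidence (θ → 90°), where cos θ → 0; in practice such returns are usually discarded as noise.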


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ryuhei Ando ◽  
Yuko Ozasa ◽  
Wei Guo

The automation of plant phenotyping using 3D imaging techniques is indispensable. However, conventional methods for reconstructing the leaf surface from 3D point clouds involve a trade-off between the accuracy of leaf surface reconstruction and robustness against noise and missing points. To mitigate this trade-off, we developed a leaf surface reconstruction method that reduces the effects of noise and missing points while maintaining surface reconstruction accuracy, by capturing two components of the leaf (its shape and the distortion of that shape) separately using leaf-specific properties. This separation simplifies leaf surface reconstruction compared with conventional methods while increasing robustness against noise and missing points. To evaluate the proposed method, we reconstructed leaf surfaces from 3D point clouds of leaves acquired from two crop species (soybean and sugar beet) and compared the results with those of conventional methods. The results showed that the proposed method robustly reconstructed the leaf surfaces for two different leaf shapes, despite the noise and missing points. To evaluate the stability of the leaf surface reconstructions, we also calculated the leaf surface areas of the target leaves over 14 consecutive days. The results derived from the proposed method showed less variation and fewer outliers compared with the conventional methods.
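Once a leaf surface has been reconstructed as a triangle mesh, the surface area tracked in the stability evaluation above is simply the sum of the triangle areas (half the magnitude of each triangle's edge cross product). The sketch below is that generic computation, independent of the paper's reconstruction method.

```python
def surface_area(vertices, faces):
    """Total area of a triangle mesh: sum of per-triangle areas."""
    total = 0.0
    for a, b, c in faces:
        ax, ay, az = vertices[a]
        bx, by, bz = vertices[b]
        cx, cy, cz = vertices[c]
        ux, uy, uz = bx - ax, by - ay, bz - az   # edge AB
        vx, vy, vz = cx - ax, cy - ay, cz - az   # edge AC
        nx = uy * vz - uz * vy                   # cross product AB x AC
        ny = uz * vx - ux * vz
        nz = ux * vy - uy * vx
        total += 0.5 * (nx*nx + ny*ny + nz*nz) ** 0.5
    return total

# Two triangles forming a 1x1 square in the plane z = 0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(surface_area(verts, [(0, 1, 2), (0, 2, 3)]))  # 1.0
```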


Author(s):  
F. Poux ◽  
R. Neuville ◽  
P. Hallot ◽  
R. Billen

This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, combines semantic and spatial concepts for basic hybrid queries over different point clouds.
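The notion of a "hybrid query", filtering on a semantic concept and a spatial predicate at once, can be sketched in plain Python. The record layout and class labels below are invented for illustration; the actual prototype stores these in PostgreSQL via the paper's meta-models.

```python
# Each point carries coordinates plus a semantic label
points = [
    {"xyz": (1.0, 2.0, 0.1), "class": "ground"},
    {"xyz": (1.2, 2.1, 3.5), "class": "roof"},
    {"xyz": (8.0, 9.0, 3.2), "class": "roof"},
]

def hybrid_query(pts, label, bbox_min, bbox_max):
    """Return points matching a semantic label inside a 3D box."""
    return [p for p in pts
            if p["class"] == label
            and all(lo <= v <= hi for v, lo, hi in
                    zip(p["xyz"], bbox_min, bbox_max))]

hits = hybrid_query(points, "roof", (0, 0, 0), (5, 5, 5))
print(len(hits))  # 1: only the first roof point lies inside the box
```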


2021 ◽  
Vol 11 (19) ◽  
pp. 9065
Author(s):  
Myungjin Choi ◽  
Jee-Hyeok Park ◽  
Qimeng Zhang ◽  
Byeung-Sun Hong ◽  
Chang-Hun Kim

We propose a novel method for addressing the problem of efficiently generating a highly refined normal map for screen-space fluid rendering. Because the process of filtering the normal map is crucially important to ensure the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we have designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six different typical scenes to enable effective demonstrations of multitype fluid simulation. Experiments indicated that our generator was able to infer clearer and more detailed features for this dataset than a basic screen-space fluid rendering method. Moreover, in some cases, the results generated by our method were even smoother than those generated by the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction: the computation time is independent of the number of simulation particles, and the spatial resolution depends only on the image resolution.
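The low-quality normal map that such a filter starts from is typically derived from per-pixel depth gradients, n ∝ (-∂z/∂x, -∂z/∂y, 1). The sketch below shows that raw normal-map construction under simplifying assumptions (unit pixel spacing, orthographic view); the paper's cGAN refinement itself is not reproduced here.

```python
def normals_from_depth(depth):
    """Screen-space normals from a depth image by central differences.

    Interior pixels get a unit normal (-dz/dx, -dz/dy, 1)/|.|;
    border pixels are left as None for simplicity.
    """
    h, w = len(depth), len(depth[0])
    normals = [[None] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            dzdx = (depth[i][j + 1] - depth[i][j - 1]) / 2.0
            dzdy = (depth[i + 1][j] - depth[i - 1][j]) / 2.0
            n = (-dzdx, -dzdy, 1.0)
            mag = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
            normals[i][j] = tuple(c / mag for c in n)
    return normals

# A planar depth ramp: every interior normal is identical
depth = [[x * 0.5 for x in range(4)] for _ in range(4)]
n_center = normals_from_depth(depth)[1][1]
print(n_center)
```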

