3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture

2018 ◽  
Vol 10 (9) ◽  
pp. 1412 ◽  
Author(s):  
Florent Poux ◽  
Romain Neuville ◽  
Gilles-Antoine Nys ◽  
Roland Billen

3D models derived from point clouds are useful in various forms for optimizing the trade-off between precision and geometric complexity. They are defined at different granularity levels according to each indoor situation. In this article, we present an integrated 3D semantic reconstruction framework that leverages segmented point cloud data and domain ontologies. Our approach follows a part-to-whole conception which models a point cloud as parametric elements that are usable per instance and aggregated to obtain a global 3D model. We first extract analytic features, object relationships and contextual information to permit better object characterization. Then, we propose a multi-representation modelling mechanism augmented by automatic recognition and fitting from the 3D library ModelNet10 to provide the best candidates for several 3D scans of furniture. Finally, we combine every element to obtain a consistent indoor hybrid 3D model. The method allows a wide range of applications, from interior navigation to virtual stores.
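The abstract does not detail the analytic features used for object characterization; a common eigenvalue-based formulation (a minimal sketch under that assumption, not the authors' implementation) derives linearity, planarity and sphericity from the covariance of a point set:

```python
import numpy as np

def analytic_features(points):
    """Eigenvalue-based shape features of a 3D point set (illustrative sketch)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    # Eigenvalues sorted in decreasing order: l1 >= l2 >= l3
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }

# A flat, plane-like patch (e.g. a table top): planarity dominates
rng = np.random.default_rng(0)
plane = rng.uniform(-1, 1, (500, 3))
plane[:, 2] *= 0.01  # squash the vertical extent
f = analytic_features(plane)
```

By construction the three features sum to one, so they can be read as a soft label of the local shape (line, plane or volume).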

Author(s):  
M. Mehranfar ◽  
H. Arefi ◽  
F. Alidoost

Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of individual models to generate the final 3D model. First, a fuzzy clustering algorithm including the height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method to generate 3D models automatically with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
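The extrusion step of this workflow can be sketched in a few lines; the polygon vertices and height below are hypothetical inputs standing in for one segmented bridge element, not data from the paper:

```python
import numpy as np

def extrude_polygon(poly_2d, height):
    """Extrude a 2D cross-section orthogonally to its projection plane (sketch).

    poly_2d: (N, 2) vertices of the element's 2D model.
    Returns the (2N, 3) vertices of the prism: bottom face at z=0, top at z=height.
    """
    poly_2d = np.asarray(poly_2d, dtype=float)
    n = len(poly_2d)
    bottom = np.hstack([poly_2d, np.zeros((n, 1))])
    top = np.hstack([poly_2d, np.full((n, 1), height)])
    return np.vstack([bottom, top])

# A 4 m x 2 m rectangular cross-section extruded into a 3 m prism
deck = extrude_polygon([[0, 0], [4, 0], [4, 2], [0, 2]], height=3.0)
```

Merging such per-element prisms back into one model corresponds to the paper's final CAD-integration step.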


Author(s):  
H. Tran ◽  
K. Khoshelham

Abstract. Automated reconstruction of 3D interior models has recently been a topic of intensive research due to its wide range of applications in Architecture, Engineering, and Construction. However, generation of 3D models from LiDAR and/or RGB-D data is challenged not only by the complexity of building geometries, but also by the presence of clutter and the inevitable defects of the input data. In this paper, we propose a stochastic approach for automatic reconstruction of 3D models of interior spaces from point clouds, which is applicable to both Manhattan and non-Manhattan world buildings. The building interior is first partitioned into a set of 3D shapes as an arrangement of permanent structures. An optimization process is then applied to search for the most probable model as the optimal configuration of the 3D shapes, using reversible jump Markov Chain Monte Carlo (rjMCMC) sampling with the Metropolis-Hastings algorithm. This optimization is based not only on the input data, but also takes into account the intermediate stages of the model during the modelling process. Consequently, it enhances the robustness of the proposed approach to inaccuracy and incompleteness of the point cloud. The feasibility of the proposed approach is evaluated on a synthetic dataset and an ISPRS benchmark dataset.
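The core of Metropolis-Hastings sampling is its acceptance rule; the sketch below shows only that rule (the full rjMCMC of the paper additionally needs dimension-changing jumps with proposal ratios, and the score function here is a hypothetical stand-in for the model's log-probability):

```python
import math, random

def metropolis_hastings_step(current_score, proposed_score, temperature=1.0):
    """One Metropolis-Hastings acceptance decision (illustrative sketch).

    A proposal that improves the configuration is always accepted; a worse one
    is accepted with probability exp((proposed - current) / temperature), which
    lets the sampler escape local optima.
    """
    if proposed_score >= current_score:
        return True
    return random.random() < math.exp((proposed_score - current_score) / temperature)

random.seed(0)
# Empirical acceptance rate for a proposal that is worse by 1.0
rate = sum(metropolis_hastings_step(0.0, -1.0) for _ in range(10000)) / 10000
```

The occasional acceptance of worse configurations is what makes the optimization robust to incomplete data: a partially supported shape can survive long enough for later evidence to justify it.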


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components of the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
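One widely used criterion for such individually optimized neighborhoods is eigenentropy minimization over candidate neighborhood sizes; the sketch below illustrates that idea under assumed candidate values of k (it is a simplification, not the paper's exact procedure):

```python
import numpy as np

def eigenentropy(points):
    """Shannon entropy of the normalized covariance eigenvalues of a neighborhood."""
    centered = points - points.mean(axis=0)
    evals = np.linalg.eigvalsh(centered.T @ centered / len(points))
    evals = np.clip(evals, 1e-12, None)  # guard against log(0)
    p = evals / evals.sum()
    return float(-(p * np.log(p)).sum())

def optimal_k(cloud, query_idx, k_candidates=(10, 25, 50, 75, 100)):
    """Pick the neighborhood size that minimizes eigenentropy (sketch)."""
    d = np.linalg.norm(cloud - cloud[query_idx], axis=1)
    order = np.argsort(d)
    scores = {k: eigenentropy(cloud[order[:k]]) for k in k_candidates if k <= len(cloud)}
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
# A strongly 1D (line-like) set has low eigenentropy; an isotropic blob has high
line = np.column_stack([np.linspace(0, 1, 200),
                        rng.normal(0, 0.001, 200),
                        rng.normal(0, 0.001, 200)])
ball = rng.normal(0, 1, (200, 3))
k_best = optimal_k(ball, 0)
```

Low eigenentropy means the neighborhood exhibits a clear geometric structure, which is exactly what makes the subsequently extracted features distinctive.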


2020 ◽  
Vol 9 (10) ◽  
pp. 588
Author(s):  
Florent Poux ◽  
Roland Billen ◽  
Jean-Paul Kasprzyk ◽  
Pierre-Henri Lefebvre ◽  
Pierre Hallot

The digital management of an archaeological site requires storing, organising, accessing and representing all the information collected in the field. Heritage building information modelling and archaeological or heritage information systems now tend to propose a common framework where all the materials are managed from a central database and visualised through a 3D representation. In this research, we present the development of a built heritage information system prototype based on a high-resolution 3D point cloud data set. The particularity of the approach is to follow a user-centred development methodology while avoiding meshing/down-sampling operations. The proposed system is initiated through a close collaboration between multi-modal users (managers, visitors, curators) and a development team (designers, developers, architects). The developed heritage information system permits the management of spatial and temporal information, including a wide range of semantics, using relational as well as NoSQL databases. The semantics used to describe the artifacts are subject to conceptual modelling. Finally, the system proposes bi-directional communication with a 3D interface able to stream massive point clouds, which is a big step forward in providing a comprehensive site representation for stakeholders while minimising modelling costs.


2019 ◽  
Vol 11 (22) ◽  
pp. 2600 ◽  
Author(s):  
Ruizhuo Zhang ◽  
Bisheng Yang ◽  
Wen Xiao ◽  
Fuxun Liang ◽  
Yang Liu ◽  
...  

Electric power transmission and maintenance is essential for the power industry. This paper proposes a method for the efficient extraction and classification of three-dimensional (3D) targets of electric power transmission facilities based on regularized grid characteristics computed from point cloud data acquired by unmanned aerial vehicles (UAVs). First, a special hashing matrix was constructed to store the point cloud after noise removal by a statistical method, which calculated the local distribution characteristics of the points within each sparse grid. Secondly, power lines were extracted by neighboring grids’ height similarity estimation and linear feature clustering. Thirdly, by analyzing features of the grid in the horizontal and vertical directions, the transmission towers in candidate tower areas were identified. The pylon center was then determined by a vertical slicing analysis. Finally, optimization was carried out, considering the topological relationship between the line segments and pylons to refine the extraction. Experimental results showed that the proposed method was able to efficiently obtain accurate coordinates of pylons and attachments in massive point data and to produce a reliable segmentation with an overall precision of 97%. The optimized algorithm was capable of eliminating interference from isolated tall trees and communication signal poles. The extracted 3D geo-information of high-voltage (HV) power lines, pylons and conductors, together with the further reconstructed 3D models, can provide valuable foundations for UAV remote-sensing inspection and corridor safety maintenance.
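The hashing-matrix idea of the first step can be sketched as a sparse 2D grid keyed by integer cell indices, with per-cell statistics such as the height range flagging pylon candidates; the scene, cell size and threshold below are hypothetical, not the paper's values:

```python
import numpy as np
from collections import defaultdict

def build_grid_hash(points, cell_size=2.0):
    """Hash 3D points into sparse 2D (x, y) grid cells (illustrative sketch)."""
    cells = defaultdict(list)
    keys = np.floor(points[:, :2] / cell_size).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        cells[key].append(idx)
    return cells

def cell_height_range(points, indices):
    """Height span within one cell: large values flag vertical structures."""
    z = points[indices, 2]
    return float(z.max() - z.min())

# Hypothetical scene: a flat ground grid plus one 30 m vertical pylon at (15.5, 15.5)
gx, gy = np.meshgrid(np.arange(0, 10, 0.5), np.arange(0, 10, 0.5))
ground = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
pylon = np.column_stack([np.full(100, 15.5), np.full(100, 15.5), np.linspace(0, 30, 100)])
pts = np.vstack([ground, pylon])
grid = build_grid_hash(pts)
```

Only occupied cells are stored, which is what keeps the structure efficient on massive but sparse transmission-corridor scans.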


Author(s):  
J. Sanchez ◽  
F. Denis ◽  
F. Dupont ◽  
L. Trassoudaine ◽  
P. Checchin

Abstract. This paper deals with 3D modeling of building interiors from point clouds captured by a 3D LiDAR scanner. Indeed, current building reconstruction processes remain mostly manual. While LiDAR data have specific properties that make reconstruction challenging (anisotropy, noise, clutter, etc.), the automatic state-of-the-art methods rely on numerous construction hypotheses, which yield 3D models relatively far from the initial data. We chose instead to propose a new modeling method that stays closer to the point cloud data, reconstructing only the scanned areas of each scene and excluding occluded regions. Following this objective, our approach reconstructs LiDAR scans individually using connected polygons. This modeling relies on the joint processing of an image created from the 2D LiDAR angular sampling and the 3D point cloud associated with one scan. Results are evaluated on synthetic and real data to demonstrate the efficiency as well as the technical strength of the proposed method.
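An image built from a scanner's angular sampling is commonly a range image indexed by azimuth and elevation; the sketch below illustrates that construction with assumed angular resolutions, not the paper's actual parameters:

```python
import numpy as np

def scan_to_range_image(points, h_res_deg=1.0, v_res_deg=1.0):
    """Project one LiDAR scan into a 2D range image over its angular sampling (sketch).

    Azimuth maps to columns, elevation to rows; each pixel stores the point range.
    """
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))       # [-180, 180)
    elevation = np.degrees(np.arcsin(z / r))     # [-90, 90]
    cols = ((azimuth + 180.0) / h_res_deg).astype(int)
    rows = ((elevation + 90.0) / v_res_deg).astype(int)
    image = np.full((int(180 / v_res_deg) + 1, int(360 / h_res_deg) + 1), np.nan)
    image[rows, cols] = r
    return image

pt = np.array([[3.0, 0.0, 4.0]])   # range 5 m, azimuth 0 deg, elevation ~53.13 deg
img = scan_to_range_image(pt)
```

Because pixel neighbors in this image are angular neighbors in the scan, 2D image processing and the 3D point cloud can be handled jointly, as the abstract describes.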


2021 ◽  
Vol 13 (10) ◽  
pp. 1947
Author(s):  
Yuanzhi Cai ◽  
Lei Fan

Recent years have witnessed an increasing use of 3D models in general, and 3D geometric models of the built environment specifically, for various applications, owing to the advancement of mapping techniques for accurate 3D information. Depending on the application scenario, various types of approaches exist to automate the construction of 3D building geometry. However, in those studies, less attention has been paid to watertight geometries derived from point cloud data, which are useful for building energy management and simulation. To this end, an efficient reconstruction approach was introduced in this study, involving the following key steps. The point cloud data are first voxelised for a ray-casting analysis to obtain the 3D indoor space. By projecting it onto a horizontal plane, an image representing the indoor area is obtained and used for room segmentation. The 2D boundary of each room candidate is extracted using new grammar rules and is extruded using the room height to generate 3D models of individual room candidates. Room connection analyses are applied to the individual models to determine the locations of doors and the topological relations between adjacent room candidates, forming an integrated and watertight geometric model. The proposed approach was tested using point cloud data representing six building sites with distinct spatial configurations of rooms, corridors and openings. The experimental results showed that accurate watertight building geometries were successfully created. The average differences between the point cloud data and the geometric models obtained ranged from 12 to 21 mm. The maximum computation time was less than 5 min for a point cloud of approximately 469 million data points, more efficient than typical reconstruction methods in the literature.
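The first two steps, voxelisation and projection onto a horizontal plane, can be sketched as follows (the voxel size and the synthetic cube are assumptions for illustration; the paper's ray-casting and grammar rules are not reproduced here):

```python
import numpy as np

def voxelise(points, voxel=0.1):
    """Occupied voxel indices of a point cloud (sketch)."""
    return np.unique(np.floor(points / voxel).astype(int), axis=0)

def floor_plan_mask(occupied, grid_shape):
    """Project occupied voxels onto the horizontal plane: a 2D indoor-area mask."""
    mask = np.zeros(grid_shape, dtype=bool)
    mask[occupied[:, 0], occupied[:, 1]] = True
    return mask

# Hypothetical 1 m cube of indoor space sampled at voxel centres (10 x 10 x 10)
c = np.arange(10) * 0.1 + 0.05
cube = np.array([[x, y, z] for x in c for y in c for z in c])
occupied = voxelise(cube)
mask = floor_plan_mask(occupied, (10, 10))
```

The resulting binary mask is the kind of indoor-area image on which room segmentation and boundary extraction can then operate.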


Author(s):  
D. Laksono ◽  
T. Aditya ◽  
G. Riyadi

Abstract. Developing a 3D city model is always a challenging task, whether in obtaining the 3D data or in presenting the model to users. Lidar is often used to produce real-world measurements, resulting in point clouds which are further processed into a 3D model. However, this method has some limitations, e.g. tedious and expensive work and high technical demands, which limit its usability over smaller areas. Currently, there exist pipelines that utilize point clouds from Lidar data to automate the generation of 3D city models. For example, 3dfier (http://github.com/tudelft3d/3dfier) is a software tool capable of generating a LoD 1 3D city model from lidar point cloud data. The resulting CityGML file can further be used in a 3D GIS viewer to produce an interactive 3D city model. This research proposes the use of the Structure from Motion (SfM) method to obtain point clouds from UAV data. Using SfM to generate point clouds means cheaper and shorter production times, and is more suitable for smaller areas compared to LiDAR. 3dfier can then be utilized to produce a 3D model from the point cloud. Subsequently, a game engine, i.e. Unity 3D, is utilized as the visualization platform. Previous work shows that a game engine can be used as an interactive environment for exploring a virtual world based on real-world measurements and other data, such as parcel boundaries. This work shows that the process of generating a 3D city model can be achieved using the proposed pipeline.


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in the ULS should be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data and image data is converted into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-colour point cloud. The experimental results show the higher registration accuracy and fusion speed of the proposed method, demonstrating its effectiveness.
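Producing an intensity image from a point cloud can be sketched as rasterising the intensity channel over a ground-plane projection (the pixel size and sample points below are hypothetical; the paper's projection geometry may differ):

```python
import numpy as np

def intensity_image(points, intensities, pixel=0.5):
    """Rasterise a point cloud's intensity channel into a 2D image (sketch).

    Each pixel stores the mean intensity of the points falling into it, so that
    2D feature matching against an optical image becomes possible.
    """
    ij = np.floor(points[:, :2] / pixel).astype(int)
    ij -= ij.min(axis=0)                   # shift indices to start at (0, 0)
    h, w = ij.max(axis=0) + 1
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    np.add.at(acc, (ij[:, 0], ij[:, 1]), intensities)
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return acc / cnt                   # empty pixels become NaN

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 0.0], [1.6, 0.1, 0.0]])
img = intensity_image(pts, np.array([10.0, 20.0, 40.0]))
```

Once both the intensity image and the optical image exist in 2D, standard feature matching supplies the correspondences that the collinearity equation needs.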


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods show significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.

