Semantic Geometric Modelling of Unstructured Indoor Point Cloud

2018 ◽  
Vol 8 (1) ◽  
pp. 9 ◽  
Author(s):  
Wenzhong Shi ◽  
Wael Ahmed ◽  
Na Li ◽  
Wenzheng Fan ◽  
Haodong Xiang ◽  
...  

A method capable of automatically reconstructing 3D building models with semantic information from the unstructured 3D point cloud of indoor scenes is presented in this paper. This method has three main steps: 3D segmentation using a new hybrid algorithm, room layout reconstruction, and wall-surface object reconstruction using an enriched approach. Unlike existing methods, this method aims to detect, cluster, and model complex structures without prior scanner or trajectory information. In addition, it enables the accurate detection of wall-surface “defacements”, such as windows, doors, and virtual openings. Beyond wall-surface apertures, closed objects, such as doors, can also be detected. Hence, the whole 3D modelling process of an indoor scene from a backpack laser scanner (BLS) dataset is achieved and recorded for the first time. This novel method was validated using both synthetic data and real data acquired by a developed BLS system for indoor scenes. On synthetic datasets our approach achieved a precision of around 94% and a recall of around 97%, while on BLS datasets it achieved a precision of around 95% and a recall of around 89%. The results reveal this novel method to be robust and accurate for 3D indoor modelling.
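The precision and recall figures reported above follow the standard detection definitions. A minimal sketch in Python (the counts below are hypothetical, not data from the paper):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Standard detection metrics used to evaluate reconstructed objects."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 94 correctly detected openings, 6 spurious, 3 missed.
p, r = precision_recall(94, 6, 3)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.94, recall=0.97
```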

Author(s):  
H. Huang ◽  
H. Jiang ◽  
C. Brenner ◽  
H. Mayer

We propose a novel method to segment Microsoft™ Kinect data of indoor scenes with an emphasis on freeform objects. We use the full 3D information for scene parsing and the segmentation of potential objects instead of treating the depth values as an additional channel of the 2D image. The raw RGBD image is first converted to a 3D point cloud with color. We then group the points into patches, which are derived from a 2D superpixel segmentation. Under the assumption that every patch in the point cloud represents (a part of) the surface of an underlying solid body, a hypothetical quasi-3D model, the "synthetic volume primitive" (SVP), is constructed by extending the patch with a synthetic extrusion in 3D. The SVPs vote for a common object via intersection. By this means, a freeform object can be "assembled" from an unknown number of SVPs from arbitrary angles. Besides the intersection, two other criteria, coplanarity and color coherence, are integrated in the global optimization to improve the segmentation. Experiments demonstrate the potential of the proposed method.
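The first step, converting a raw RGBD image into a colored 3D point cloud, can be sketched with standard pinhole back-projection (the intrinsics fx, fy, cx, cy and the toy depth map below are assumptions, not Kinect calibration values):

```python
import numpy as np

def depth_to_points(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image into a colored 3D point cloud using
    the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = pts[:, 2] > 0          # drop pixels with no depth return
    return pts[valid], colors[valid]

depth = np.full((4, 4), 1.5)       # toy 4x4 depth map, 1.5 m everywhere
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
pts, cols = depth_to_points(depth, rgb, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```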


Author(s):  
Bernardo Lourenço ◽  
Tiago Madeira ◽  
Paulo Dias ◽  
Vitor M. Ferreira Santos ◽  
Miguel Oliveira

Purpose
2D laser rangefinders (LRFs) are commonly used sensors in the field of robotics, as they provide accurate range measurements with high angular resolution. These sensors can be coupled with mechanical units which, by granting an additional degree of freedom to the movement of the LRF, enable the 3D perception of a scene. To be successful, this reconstruction procedure requires evaluating with high accuracy the extrinsic transformation between the LRF and the motorized system.

Design/methodology/approach
In this work, a calibration procedure is proposed to evaluate this transformation. The method does not require a predefined marker (commonly used despite its numerous disadvantages), as it uses planar features in the acquired point clouds.

Findings
Qualitative inspections show that the proposed method significantly reduces the artifacts that typically appear in point clouds because of inaccurate calibrations. Furthermore, quantitative results and comparisons with a high-resolution 3D scanner demonstrate that the calibrated point cloud represents the geometries present in the scene with much higher accuracy than the uncalibrated point cloud.

Practical implications
The last key point of this work is the comparison of two laser scanners: the lemonbot (the authors’) and a commercial FARO scanner. Despite being almost ten times cheaper, the authors’ laser scanner was able to achieve similar results in terms of geometric accuracy.

Originality/value
This work describes a novel calibration technique that is easy to implement and able to achieve accurate results. One of its key features is the use of planes to calibrate the extrinsic transformation.
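Calibrating from planar features presupposes fitting planes to the acquired point clouds. A minimal least-squares plane fit via SVD, assuming a simple numpy array representation of the cloud (a generic sketch, not the authors' implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal, centroid).
    The normal is the right singular vector with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

# Noisy samples of the plane z = 0.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
normal, _ = fit_plane(pts)
print(abs(normal[2]))  # close to 1.0: the recovered normal points along z
```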


Author(s):  
Sevan Goenezen ◽  
Maulik C Kotecha ◽  
Junuthula N Reddy

Polycrystalline materials consist of grains (crystals) oriented at different angles, resulting in heterogeneous and anisotropic mechanical behavior at the micro-length scale. In this study, a novel method is proposed for the first time to determine the [Formula: see text] crystal orientations of grains in a [Formula: see text] domain, using solely [Formula: see text] deformation fields. The grain boundaries are assumed to be unknown and are delineated from the reconstructed changes in crystallographic orientation. Further, the constitutive equations that describe the mechanical behavior of the domain in [Formula: see text] under plane stress conditions are derived, assuming that the material is transversely isotropic in 3D. Finite-element-based algorithms are utilized to discretize the inverse problem. The in-house inverse problem solver is coupled with MATLAB-based optimization scripts to solve for the mechanical property distributions. The performance of this method is tested at different noise levels, with synthetic displacements used as measured data. The reconstructions deteriorate as the noise level increases. This work presents a first milestone in the verification of this novel technology with synthetic data.
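The noise-level tests can be illustrated by corrupting a synthetic displacement field with zero-mean Gaussian noise scaled to a percentage of the field's RMS amplitude (this scaling convention is an assumption; the abstract does not specify how noise levels are defined):

```python
import numpy as np

def add_noise(displacements, noise_percent, rng):
    """Corrupt synthetic displacement data with zero-mean Gaussian noise
    whose standard deviation is a percentage of the field's RMS amplitude."""
    rms = np.sqrt(np.mean(displacements ** 2))
    sigma = noise_percent / 100.0 * rms
    return displacements + rng.normal(0.0, sigma, displacements.shape)

rng = np.random.default_rng(1)
u = np.sin(np.linspace(0, np.pi, 1000))   # toy 1D displacement field
u_noisy = add_noise(u, noise_percent=3.0, rng=rng)
```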


2018 ◽  
Vol 42 (3) ◽  
pp. 457-467 ◽  
Author(s):  
A. N. Kamaev ◽  
D. A. Karmanov

A task of autonomous underwater vehicle (AUV) navigation is considered in the paper. The images obtained from an onboard stereo camera are used to build point clouds attached to a particular AUV position. Quantized SIFT descriptors of points are stored in a metric tree to organize an effective search procedure using a best-bin-first approach. Correspondences for a new point cloud are searched for in a compact group of point clouds that have the largest number of similar descriptors stored in the tree. The new point cloud can be positioned relative to the other clouds without any prior information about the AUV position or the uncertainty of this position. This approach increases the reliability of the AUV navigation system and makes it insensitive to data losses, textureless seafloor regions, and long passes without trajectory intersections. Several algorithms are described in the paper: an algorithm for point cloud computation, an algorithm for establishing point cloud correspondences, and an algorithm for building groups of potentially linked point clouds to speed up the global search for correspondences. The general navigation algorithm, consisting of three parallel subroutines (image adding, search tree updating, and global optimization), is also presented. The proposed navigation system is tested on real and synthetic data. Tests on real data showed that the trajectory can be built even for an image sequence with 60% data losses, where successive images have either small or zero overlap. Tests on synthetic data showed that the constructed trajectory is close to the true one even for long missions. The average speed of image processing by the proposed navigation system is about 3 frames per second on a mid-priced desktop CPU.
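The idea of ranking stored point clouds by their number of similar quantized descriptors can be sketched with a toy inverted index (the paper uses a metric tree with best-bin-first search over quantized SIFT descriptors; the rounding-based quantization and 8-D descriptors below are simplifications, not the authors' method):

```python
import numpy as np
from collections import Counter, defaultdict

class DescriptorIndex:
    """Toy stand-in for the metric tree: descriptors are quantized by
    coarse rounding and an inverted index maps each quantized word to
    the clouds that contain it."""
    def __init__(self, step=0.25):
        self.step = step
        self.index = defaultdict(set)

    def _quantize(self, desc):
        return tuple(np.round(desc / self.step).astype(int))

    def add_cloud(self, cloud_id, descriptors):
        for d in descriptors:
            self.index[self._quantize(d)].add(cloud_id)

    def candidate_clouds(self, descriptors, k=2):
        """Return the k clouds sharing the most quantized words with the query."""
        votes = Counter()
        for d in descriptors:
            for cid in self.index.get(self._quantize(d), ()):
                votes[cid] += 1
        return [cid for cid, _ in votes.most_common(k)]

rng = np.random.default_rng(2)
idx = DescriptorIndex()
base = rng.uniform(0, 1, (50, 8))          # 8-D toy descriptors
idx.add_cloud("A", base)
idx.add_cloud("B", rng.uniform(0, 1, (50, 8)))
print(idx.candidate_clouds(base))           # "A" ranks first
```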


2020 ◽  
Vol 12 (8) ◽  
pp. 1240 ◽  
Author(s):  
Xabier Blanch ◽  
Antonio Abellan ◽  
Marta Guinau

The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints with respect to the use of LiDAR point clouds. Oftentimes, point clouds (PC) obtained by time-lapse photogrammetry have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by making the most of the iterative solutions for both camera position estimation and internal calibration parameters that are obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point over multiple photogrammetric models to give a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested using both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that attempt to emulate the photogrammetric models. Real data were obtained by very low-cost photogrammetric systems specially developed for this experiment. Resulting point clouds were improved when applying the algorithm in both synthetic and real experiments, e.g., the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in synthetic tests and from 1.5 cm to 0.5 cm in real conditions.
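The core of PCStacking, taking the per-point median of the Z coordinates across co-registered models, can be sketched as follows (the common point grid is an assumption of this toy example):

```python
import numpy as np

def pc_stack(models):
    """PCStacking core idea: for co-registered models with matched points,
    replace each Z coordinate by the per-point median across models."""
    z = np.stack([m[:, 2] for m in models])      # shape (n_models, n_points)
    stacked = models[0].copy()
    stacked[:, 2] = np.median(z, axis=0)
    return stacked

rng = np.random.default_rng(3)
truth = np.column_stack([np.arange(100.0), np.arange(100.0), np.zeros(100)])
models = [truth + np.column_stack([np.zeros((100, 2)),
                                   rng.normal(0, 0.03, (100, 1))])
          for _ in range(5)]
stacked = pc_stack(models)
# The median suppresses per-model noise: the stacked Z error is smaller
# than the average single-model Z error.
```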


2021 ◽  
Vol 10 (3) ◽  
pp. 181
Author(s):  
Antonio Gámiz-Gordo ◽  
Juan Cantizani-Oliva ◽  
Juan Francisco Reinoso-Gordo

The work of Philibert Girault de Prangey, who was a draughtsman, pioneering photographer and Islamic architecture scholar, has been the subject of recent exhibitions in his hometown (Langres, 2019), at the Metropolitan Museum (New York, 2019) and at the Musée d’Orsay (Paris, 2020). After visiting Andalusia between 1832 and 1833, Prangey completed the publication “Monuments arabes et moresques de Cordoue, Seville et Grenada” in 1839, based on his own drawings and measurements. For the first time, this research analyses his interior perspectives of the Mosque-Cathedral of Cordoba (Spain). The novel methodology is based on comparing them with a digital model derived from the point cloud captured by a 3D laser scanner. After locating the different viewpoints, the geometric precision and the elaboration process are analysed, taking into account historic images by various authors, other details published by Prangey and the architectural transformations of the building. In this way, the veracity and documentary interest of some beautiful perspectives of a monument inscribed on the UNESCO World Heritage List are assessed.


2021 ◽  
Vol 11 (4) ◽  
pp. 1465
Author(s):  
Rocio Mora ◽  
Jose Antonio Martín-Jiménez ◽  
Susana Lagüela ◽  
Diego González-Aguilera

Total and automatic digitalization of indoor spaces in 3D implies a great advance in building maintenance and construction tasks, which currently require visits and manual work. Terrestrial laser scanners (TLS) have been widely used for these tasks, although the acquisition methodology with TLS systems is time-consuming, and each point cloud is acquired in a different coordinate system, so the user has to post-process the data to clean it and obtain a single point cloud of the whole scene. This paper presents a solution for the automatic acquisition and registration of point clouds from indoor scenes, designed for point clouds acquired with a terrestrial laser scanner (TLS) mounted on an unmanned ground vehicle (UGV). The methodology developed allows the generation of one complete dense 3D point cloud, consisting of the acquired point clouds registered in the same coordinate system, reaching an accuracy below 1 cm in section dimensions and below 1.5 cm in wall thickness, which makes it valid for quality control in building works. Two case studies corresponding to building works were chosen for the validation of the method, showing the applicability of the methodology developed for tasks related to monitoring the evolution of the construction.
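Registering each scan into one coordinate system amounts to applying each acquisition's platform pose as a rigid transform and concatenating the results. A minimal sketch, assuming the poses (R, t) are already known from the UGV's positioning (an assumption of this example):

```python
import numpy as np

def register(clouds, poses):
    """Merge scans into one cloud: each scan is taken in its own frame,
    so map it into the global frame with its rigid pose (R, t) and
    concatenate all transformed scans."""
    merged = []
    for pts, (R, t) in zip(clouds, poses):
        merged.append(pts @ R.T + t)
    return np.vstack(merged)

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
scan_a = np.array([[1.0, 0.0, 0.0]])
scan_b = np.array([[0.0, -1.0, 0.0]])  # same wall point seen after a 90° turn
merged = register([scan_a, scan_b], [(np.eye(3), np.zeros(3)), (R, np.zeros(3))])
print(merged)  # both rows map to (1, 0, 0)
```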


2020 ◽  
Vol 13 (12) ◽  
pp. 6361-6381
Author(s):  
Marisol Monterrubio-Velasco ◽  
F. Ramón Zúñiga ◽  
Quetzalcoatl Rodríguez-Pérez ◽  
Otilio Rojas ◽  
Armando Aguilar-Meléndez ◽  
...  

Abstract. Seismicity and magnitude distributions are fundamental for seismic hazard analysis. The Mexican subduction margin along the Pacific coast is one of the most active seismic zones in the world, which makes it an optimal region for observation and experimentation. Some remarkable seismicity features have been observed in a subvolume of this subduction region, suggesting that the observed simplicity of earthquake sources arises from the rupturing of single asperities. This subregion has been named SUB3 in a recent seismotectonic regionalization of Mexico. In this work, we numerically test this hypothesis using the TREMOL (sTochastic Rupture Earthquake MOdeL) v0.1.0 code. As test cases, we choose four of the most significant recent events (6.5 < Mw < 7.8) that occurred in the Guerrero–Oaxaca region (SUB3) during the period 1988–2018 and whose associated seismic histories are well recorded in the regional catalogs. Synthetic seismicity results show a reasonable fit to the real data, which improves as the available data from the real events increase. These results support the hypothesis that single-asperity ruptures are a distinctive feature controlling seismicity in SUB3. Moreover, a fault aspect ratio sensitivity analysis is carried out to study how the synthetic seismicity varies. Our results indicate that asperity shape is an important modeling parameter controlling the frequency–magnitude distribution of synthetic data. Therefore, TREMOL provides appropriate means to model complex seismicity curves, such as those observed in the SUB3 region, and highlights its usefulness as a tool to shed additional light on the earthquake process.
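The frequency–magnitude distributions compared in the study follow the Gutenberg–Richter relation log10 N = a - bM. A sketch of computing the cumulative curve and estimating the b-value from a toy catalog (the synthetic catalog below is illustrative, not SUB3 data):

```python
import numpy as np

def frequency_magnitude(mags, bin_width=0.1):
    """Cumulative frequency-magnitude distribution N(>=M), the curve used
    to compare synthetic and real catalogs."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts = np.array([(mags >= m).sum() for m in bins])
    return bins, counts

rng = np.random.default_rng(4)
# Toy catalog with b = 1: magnitudes above Mc = 4 are exponentially
# distributed with scale 1 / (b * ln 10).
mags = 4.0 + rng.exponential(1.0 / np.log(10), 5000)
bins, counts = frequency_magnitude(mags)
# Fit log10 N = a - bM over the lower-magnitude bins to estimate b.
slope = np.polyfit(bins[:20], np.log10(counts[:20]), 1)[0]
print(-slope)  # b-value near 1.0
```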


Author(s):  
E. V. Shalnov ◽  
A. S. Konushin

Known scene geometry and camera calibration parameters give important information to video content analysis systems. In this paper, we propose a novel method for camera pose estimation based on observations of people in input video captured by a static camera. As opposed to previous techniques, our method can deal with false positive detections and inaccurate localization results. Specifically, the proposed method does not make any assumption about the object detector utilized and takes it as a parameter. Moreover, we do not require a huge labeled dataset of real data and train on synthetic data only. We apply the proposed technique to camera pose estimation based on head observations. Our experiments show that the algorithm trained on the synthetic dataset generalizes to real data and is robust to false positive detections.
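Training on synthetic data only implies generating detections that mimic an imperfect detector, including false positives. A toy generator, assuming uniformly placed false positives and an illustrative "head band" in the image (all rates and sizes here are assumptions, not the paper's setup):

```python
import numpy as np

def synthetic_detections(n_heads, fp_rate, image_size, rng):
    """Generate toy training samples: plausible head positions plus a
    fraction of uniformly placed false positives, mimicking an
    imperfect object detector."""
    w, h = image_size
    heads = np.column_stack([rng.uniform(0, w, n_heads),
                             rng.uniform(0.3 * h, 0.6 * h, n_heads)])
    n_fp = int(fp_rate * n_heads)
    fps = np.column_stack([rng.uniform(0, w, n_fp), rng.uniform(0, h, n_fp)])
    dets = np.vstack([heads, fps])
    labels = np.concatenate([np.ones(n_heads), np.zeros(n_fp)])
    return dets, labels

rng = np.random.default_rng(5)
dets, labels = synthetic_detections(100, fp_rate=0.2, image_size=(640, 480), rng=rng)
print(dets.shape, int(labels.sum()))  # (120, 2) 100
```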


2020 ◽  
Vol 12 (16) ◽  
pp. 6556
Author(s):  
Antonio Gámiz-Gordo ◽  
Ignacio Ferrer-Pérez-Blanco ◽  
Juan Francisco Reinoso-Gordo

This research documents and graphically analyzes the muqarnas of the pavilions at the Court of the Lions in the Alhambra in Granada, a World Heritage Site. In order to cast some light on the understanding and preservation of these 14th-century architectural elements, after a brief report of historical data on catastrophes and restorations, a novel methodology for the case study based on three complementary graphic analyses is presented here: first, a review of outstanding images ranging from the 17th to the 20th centuries; subsequently, new CAD (computer-aided design) drawings of the pavilion muqarnas, testing the theoretical principles behind their geometric grouping, are produced for the first time; and finally, a 3D laser scanner is used to understand the precise present-day state from the point cloud obtained. Comparing the drawings allows us to assess the relevance of the muqarnas while proving, for the first time, that the muqarnas of the two pavilions have distinct configurations and different numbers of pieces. This process also reveals geometric deformations existing in the original Nasrid muqarnas compositions, identifying small pieces hitherto unknown, plus additional deformations resulting from adjustments after important threats that both pavilions and their muqarnas overcame over the centuries, despite their fragile construction.

